Numerical Analysis
2.I.5E
Part IB, 2001 commentFind an LU factorization of the matrix
and use it to solve the linear system , where
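The matrix and right-hand side from this question did not survive extraction, so the following is a generic, minimal sketch of the LU-then-substitution procedure the question asks for; the matrix `A` and vector `b` are illustrative placeholders, not the exam's data.

```python
import numpy as np

def lu_factor(A):
    """Doolittle LU factorization without pivoting: A = L @ U,
    with L unit lower triangular and U upper triangular."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # multiplier for row i
            U[i, k:] -= L[i, k] * U[k, k:]   # eliminate below the pivot
    return L, U

def lu_solve(L, U, b):
    """Solve L y = b by forward substitution, then U x = y backward."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 5.0])
L, U = lu_factor(A)
x = lu_solve(L, U, b)
```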
2.II.14E
Part IB, 2001 comment(a) Let be an positive-definite, symmetric matrix. Define the Cholesky factorization of and prove that it is unique.
(b) Let be an matrix, , such that . Prove the uniqueness of the "skinny QR factorization"
where the matrix is with orthonormal columns, while is an upper-triangular matrix with positive diagonal elements.
[Hint: Show that you may choose as a matrix that features in the Cholesky factorization of .]
3.I.6E
Part IB, 2001 commentGiven , let the th-degree polynomial interpolate the values , , where are distinct. Given , find the error in terms of a derivative of .
3.II.16E
Part IB, 2001 commentLet the monic polynomials , be orthogonal with respect to the weight function , where the degree of each is exactly .
(a) Prove that each , has distinct zeros in the interval .
(b) Suppose that the satisfy the three-term recurrence relation
where . Set
Prove that , and deduce that all the eigenvalues of reside in .
2.I.5B
Part IB, 2002 commentApplying the Gram-Schmidt orthogonalization, compute a "skinny"
QR-factorization of the matrix
i.e. find a matrix with orthonormal columns and an upper triangular matrix such that .
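The exam's matrix is lost to extraction, but the Gram–Schmidt construction of a "skinny" QR factorization can be sketched generically as follows; the matrix `A` below is a made-up example.

```python
import numpy as np

def skinny_qr(A):
    """'Skinny' QR by classical Gram-Schmidt: for A (m x n, m >= n),
    A = Q @ R with Q having orthonormal columns and R upper triangular
    with positive diagonal."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float).copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component along earlier q_i
            v -= R[i, j] * Q[:, i]        # subtract it off
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, R = skinny_qr(A)
```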
2.II.14B
Part IB, 2002 commentLet and let distinct points be given.
(a) Define the divided difference of order in terms of interpolating polynomials. Prove that it is a symmetric function of the variables .
(b) Prove the recurrence relation
(c) Hence or otherwise deduce that, for any , we have
(d) From the formulas above, show that, for any ,
where .
3.I.6B
Part IB, 2002 commentFor numerical integration, a quadrature formula
is applied which is exact on , i.e., for all polynomials of degree .
Prove that such a formula is exact for all if and only if , are the zeros of an orthogonal polynomial which satisfies for all . [You may assume that has distinct zeros.]
3.II.16B
Part IB, 2002 comment(a) Consider a system of linear equations with a non-singular square matrix . To determine its solution we apply the iterative method
Here , while the matrix is such that implies . The initial vector is arbitrary. Prove that, if the matrix possesses linearly independent eigenvectors whose corresponding eigenvalues satisfy , then the method converges for any choice of , i.e. as .
(b) Describe the Jacobi iteration method for solving . Show directly from the definition of the method that, if the matrix is strictly diagonally dominant by rows, i.e.
then the method converges.
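The Jacobi iteration described in part (b) can be sketched in a few lines; the strictly diagonally dominant matrix below is an illustrative example (not taken from the exam), chosen so that the convergence hypothesis of the question holds.

```python
import numpy as np

def jacobi(A, b, x0=None, iters=100):
    """Jacobi iteration x_{k+1} = D^{-1}(b - (A - D) x_k), D = diag(A).
    Converges whenever A is strictly diagonally dominant by rows."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    d = np.diag(A)
    R = A - np.diag(d)          # off-diagonal part of A
    for _ in range(iters):
        x = (b - R @ x) / d
    return x

# Strictly diagonally dominant by rows: |a_ii| > sum_{j != i} |a_ij|.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])
b = np.array([6.0, 8.0, 4.0])
x = jacobi(A, b)
```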
2.I.5B
Part IB, 2003 commentLet
Find the LU factorization of the matrix and use it to solve the system .
2.II.14B
Part IB, 2003 commentLet
be an approximation of the second derivative which is exact for , the set of polynomials of degree , and let
be its error.
(a) Determine the coefficients .
(b) Using the Peano kernel theorem prove that, for , the set of three times continuously differentiable functions, the error satisfies the inequality
3.I.6B
Part IB, 2003 commentGiven distinct points , let
be the fundamental Lagrange polynomials of degree , let
and let be any polynomial of degree .
(a) Prove that .
(b) Hence or otherwise derive the formula
which is the decomposition of into partial fractions.
3.II.16B
Part IB, 2003 commentThe functions are generated by the Rodrigues formula:
(a) Show that is a polynomial of degree , and that the are orthogonal with respect to the scalar product
(b) By induction or otherwise, prove that the satisfy the three-term recurrence relation
[Hint: you may need to prove the equality as well.]
2.I.9A
Part IB, 2004 commentDetermine the coefficients of Gaussian quadrature for the evaluation of the integral
that uses two function evaluations.
2.II.20A
Part IB, 2004 commentGiven an matrix and , prove that the vector is the solution of the least-squares problem for if and only if . Let
Determine the solution of the least-squares problem for .
3.I.11A
Part IB, 2004 commentThe linear system
where real and are given, is solved by the iterative procedure
Determine the conditions on that guarantee convergence.
3.II.22A
Part IB, 2004 commentGiven , we approximate by the linear combination
By finding the Peano kernel, determine the least constant such that
1.I.6F
Part IB, 2005 commentDetermine the Cholesky factorization (without pivoting) of the matrix
where is a real parameter. Hence, find the range of values of for which the matrix is positive definite.
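The parametrised matrix of this question is lost, but the Cholesky algorithm it asks for can be sketched generically: the factorization succeeds exactly when the matrix is positive definite, which is how the "range of values" part of such questions is typically decided. The matrix `A` below is an illustrative positive definite example.

```python
import numpy as np

def cholesky(A):
    """Cholesky factorization A = L @ L.T for symmetric positive
    definite A, with L lower triangular and positive diagonal.
    Raises ValueError if A is not positive definite."""
    n = A.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]
            if i == j:
                if s <= 0:
                    raise ValueError("matrix is not positive definite")
                L[i, i] = np.sqrt(s)
            else:
                L[i, j] = s / L[j, j]
    return L

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])
L = cholesky(A)
```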
2.II.18F
Part IB, 2005 comment(a) Let be a set of polynomials orthogonal with respect to some inner product in the interval . Write explicitly the least-squares approximation to by an th-degree polynomial in terms of the polynomials .
(b) Let an inner product be defined by the formula
Determine the th degree polynomial approximation of with respect to this inner product as a linear combination of the underlying orthogonal polynomials.
3.II.19F
Part IB, 2005 commentGiven real , we consider the matrix
Construct the Jacobi and Gauss-Seidel iteration matrices originating in the solution of the linear system .
Determine the range of real for which each iterative procedure converges.
4.I.8F
Part IB, 2005 commentDefine Gaussian quadrature.
Evaluate the coefficients of the Gaussian quadrature of the integral
which uses two function evaluations.
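The specific weight function and interval of this question are lost, so as a hedged stand-in here is the classical two-point Gauss–Legendre rule (weight 1), which uses two function evaluations and is exact for polynomials of degree up to 3.

```python
import numpy as np

def gauss2(f, a, b):
    """Two-point Gauss-Legendre quadrature on [a, b]: nodes +-1/sqrt(3)
    with weights 1 on [-1, 1], mapped affinely to [a, b]. Exact for
    all polynomials of degree <= 3."""
    mid, half = (a + b) / 2, (b - a) / 2
    nodes = np.array([-1.0, 1.0]) / np.sqrt(3.0)
    return half * sum(f(mid + half * t) for t in nodes)

# Exactness check on a cubic: the integral of x^3 over [0, 1] is 1/4.
approx = gauss2(lambda x: x**3, 0.0, 1.0)
```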
1.I.6D
Part IB, 2006 comment(a) Perform the LU-factorization with column pivoting of the matrix
(b) Explain briefly why every nonsingular matrix admits an LU-factorization with column pivoting.
2.II.18D
Part IB, 2006 comment(a) For a positive weight function , let
be the corresponding Gaussian quadrature with nodes. Prove that all the coefficients are positive.
(b) The integral
is approximated by a quadrature
which is exact on polynomials of degree and has positive coefficients . Prove that, for any continuous on , the quadrature converges to the integral, i.e.,
[You may use the Weierstrass theorem: for any continuous on , and for any , there exists a polynomial of degree such that ]
3.II.19D
Part IB, 2006 comment(a) Define the QR factorization of a rectangular matrix and explain how it can be used to solve the least squares problem of finding an such that
and the norm is the Euclidean distance .
(b) Define a Householder transformation (reflection) and prove that is an orthogonal matrix.
(c) Using Householder reflection, solve the least squares problem for the case
and find the value of .
4.I.8D
Part IB, 2006 comment(a) Given the data
\begin{tabular}{c|r|r|r|r} & & 0 & 1 & 3 \\ \hline & & & & 9 \end{tabular}
find the interpolating cubic polynomial in the Newton form, and transform it to the power form.
(b) We add to the data one more value at . Find the power form of the interpolating quartic polynomial to the extended data
\begin{tabular}{c|c|r|r|r|r} & & 0 & 1 & 2 & 3 \\ \hline & & & & & 9 \end{tabular}
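The data tables above are only partially legible after extraction, so here is a generic sketch of the Newton-form construction the question asks for: divided differences computed in place, then Horner-style evaluation. The nodes and values below are illustrative (samples of x^3 + 1), not the exam's data.

```python
import numpy as np

def divided_differences(xs, ys):
    """Return the Newton coefficients f[x0], f[x0,x1], ..., f[x0..xn]
    via the standard divided-difference recurrence, computed in place."""
    c = np.array(ys, dtype=float)
    n = len(xs)
    for j in range(1, n):
        # after this pass, c[i] holds f[x_{i-j}, ..., x_i]
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, coeffs, x):
    """Horner-style evaluation of the Newton form at x."""
    p = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        p = p * (x - xs[k]) + coeffs[k]
    return p

xs = [0.0, 1.0, 2.0, 3.0]       # illustrative nodes
ys = [1.0, 2.0, 9.0, 28.0]      # values of x^3 + 1 at those nodes
coeffs = divided_differences(xs, ys)
```

Since four points determine the cubic uniquely, the interpolant here reproduces x^3 + 1 exactly; adding a fifth data point only appends one more divided-difference term, which is why the Newton form extends cheaply.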
1.I.6F
Part IB, 2007 commentSolve the least squares problem
using the method of Householder transformations. (A solution using normal equations is not acceptable.)
2.II.18F
Part IB, 2007 commentFor a symmetric, positive definite matrix with the spectral radius , the linear system is solved by the iterative procedure
where is a real parameter. Find the range of that guarantees convergence of to the exact solution for any choice of .
3.II.19F
Part IB, 2007 commentProve that the monic polynomials , orthogonal with respect to a given weight function on , satisfy the three-term recurrence relation
where . Express the values and in terms of and and show that .
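The three-term recurrence for monic orthogonal polynomials can be run numerically once the inner product is computable. As an illustrative sketch (weight 1 on [-1, 1], i.e. monic Legendre polynomials; the exam's weight function is lost), with coefficients a_k = <x p_k, p_k>/<p_k, p_k> and b_k = <p_k, p_k>/<p_{k-1}, p_{k-1}>:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def monic_orthogonal(n_max):
    """Monic polynomials orthogonal w.r.t. weight 1 on [-1, 1],
    built from p_{k+1}(x) = (x - a_k) p_k(x) - b_k p_{k-1}(x).
    Coefficients are in ascending order (constant term first)."""
    def inner(p, q):
        # exact value of the integral of p*q over [-1, 1]:
        # odd monomials integrate to 0, x^k to 2/(k+1) for even k
        r = P.polymul(p, q)
        return sum(c * 2.0 / (k + 1) for k, c in enumerate(r) if k % 2 == 0)

    ps = [np.array([1.0])]          # p_0 = 1
    x = np.array([0.0, 1.0])        # the monomial x
    for k in range(n_max):
        pk = ps[-1]
        a = inner(P.polymul(x, pk), pk) / inner(pk, pk)
        p_next = P.polysub(P.polymul(x, pk), a * pk)
        if k > 0:
            b = inner(pk, pk) / inner(ps[-2], ps[-2])
            p_next = P.polysub(p_next, b * ps[-2])
        ps.append(p_next)
    return ps

ps = monic_orthogonal(3)   # p_1 = x, p_2 = x^2 - 1/3, p_3 = x^3 - (3/5) x
```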
4.I.8F
Part IB, 2007 commentGiven , we approximate by the linear combination
Using the Peano kernel theorem, determine the least constant in the inequality
and give an example of for which the inequality turns into equality.
1.I.6D
Part IB, 2008 commentShow that if , where is a lower triangular matrix with all elements on the main diagonal being unity and is a diagonal matrix with positive elements, then is positive definite. Find and the corresponding when
2.II.18D
Part IB, 2008 comment(a) A Householder transformation (reflection) is given by
where , and is the unit matrix and is a non-zero vector which has norm . Show that is orthogonal.
(b) Suppose that and with . Show that if minimises then it also minimises , where is an arbitrary orthogonal matrix.
(c) Using Householder reflection, find the that minimises when
3.II.19D
Part IB, 2008 commentStarting from the Taylor formula for with an integral remainder term, show that the error of an approximant can be written in the form (Peano kernel theorem)
when , which is identically zero if is a polynomial of degree , satisfies conditions that you should specify. Give an expression for .
Hence determine the minimum value of in the inequality
when
4.I.8D
Part IB, 2008 commentShow that the Chebyshev polynomials, obey the orthogonality relation
State briefly how an optimal choice of the parameters is made in the Gaussian quadrature formula
Find these parameters for the case .
Paper 1, Section I, C
Part IB, 2009 commentThe real non-singular matrix is written in the form , where the matrices are diagonal and non-singular, strictly upper-triangular and strictly lower-triangular respectively.
Given , the Jacobi iteration for solving is
where the th iterate is . Show that the iteration converges to the solution of , independent of the starting choice , if and only if the spectral radius of the matrix is less than 1.
Hence find the range of values of the real number for which the iteration will converge when
Paper 4, Section I, C
Part IB, 2009 commentSuppose that for all . The weights and nodes are chosen so that the Gaussian quadrature formula
is exact for every polynomial of degree . Show that the are all positive.
When and , the first three underlying orthogonal polynomials are , and . Find and when .
Paper 2, Section II, C
Part IB, 2009 commentThe real orthogonal matrix with is a Givens rotation with rotation angle . Write down the form of .
Show that for any matrix it is possible to choose such that the matrix satisfies for any , where .
Let
By applying a sequence of Givens rotations of the form , chosen to reduce the elements in the first column below the main diagonal to zero, find a factorisation of the matrix of the form , where is an orthogonal matrix and is an upper-triangular matrix for which the leading non-zero element in each row is positive.
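The sequence-of-Givens-rotations procedure can be sketched generically; the matrix below is an illustrative example, not the exam's. Each rotation mixes the pivot row with one lower row to zero a single subdiagonal entry, and the convention c = a/r, s = b/r makes the new pivot entry r >= 0, giving the positive leading elements the question requires.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) so that [[c, s], [-s, c]] @ [a, b] = [r, 0], r >= 0."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0 else (a / r, b / r)

def qr_givens(A):
    """QR factorization by Givens rotations, one per subdiagonal entry."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):
            c, s = givens(R[j, j], R[i, j])
            G = np.eye(m)
            G[[j, i], [j, i]] = c          # G[j,j] = G[i,i] = c
            G[j, i], G[i, j] = s, -s
            R = G @ R                      # zeroes R[i, j]
            Q = Q @ G.T                    # accumulate the orthogonal factor
    return Q, R

A = np.array([[3.0, 1.0],
              [4.0, 2.0],
              [0.0, 1.0]])
Q, R = qr_givens(A)
```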
Paper 3, Section II, C
Part IB, 2009 commentStarting from Taylor's theorem with integral form of the remainder, prove the Peano kernel theorem: the error of an approximant applied to can be written in the form
You should specify the form of . Here it is assumed that is identically zero when is a polynomial of degree . State any other necessary conditions.
Setting and , find and show that it is negative for when
Hence determine the minimum value of for which
holds for all .
Paper 1, Section I, C
Part IB, 2010 commentObtain the Cholesky decompositions of
What is the minimum value of for to be positive definite? Verify that if then is positive definite.
Paper 4, Section I, C
Part IB, 2010 commentSuppose are pairwise distinct and is continuous on . For define
and for
Show that is a polynomial of order which interpolates at .
Given and , determine the interpolating polynomial.
Paper 1, Section II, 18C
Part IB, 2010 commentLet
be an inner product. The Hermite polynomials are polynomials in of degree with leading term which are orthogonal with respect to the inner product, with
and . Find a three-term recurrence relation which is satisfied by and for . [You may assume without proof that
Next let be the distinct zeros of and for define the Lagrangian polynomials
associated with these points. Prove that if .
Paper 2, Section II, C
Part IB, 2010 commentConsider the initial value problem for an autonomous differential equation
and its approximation on a grid of points . Writing , it is proposed to use one of two Runge-Kutta schemes defined by
where and
What is the order of each scheme? Determine the A-stability of each scheme.
Paper 3, Section II, C
Part IB, 2010 commentDefine the QR factorization of an matrix and explain how it can be used to solve the least squares problem of finding the which minimises where , and the norm is the Euclidean one.
Define a Householder (reflection) transformation and show that it is an orthogonal matrix.
Using a Householder reflection, solve the least squares problem for
giving both and .
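The Householder route to least squares that this question (and several later ones) asks for can be sketched generically; the data below are illustrative placeholders, since the exam's matrix is lost. Each reflection H = I - 2uu^T maps the current column segment onto a multiple of e_1, after which the problem reduces to back substitution on the top triangular block.

```python
import numpy as np

def householder_qr(A):
    """QR factorization via Householder reflections H = I - 2 u u^T
    (u a unit vector), applied column by column."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k].copy()
        # sign chosen to avoid cancellation in u = x - alpha * e_1
        alpha = -np.linalg.norm(x) if x[0] >= 0 else np.linalg.norm(x)
        u = x.copy()
        u[0] -= alpha
        norm_u = np.linalg.norm(u)
        if norm_u > 1e-14:
            u /= norm_u
            H = np.eye(m)
            H[k:, k:] -= 2.0 * np.outer(u, u)
            R = H @ R
            Q = Q @ H          # H is symmetric and orthogonal
    return Q, R

def lstsq_via_qr(A, b):
    """Minimise ||Ax - b||_2 by solving the top n x n block of
    R x = Q^T b by back substitution."""
    m, n = A.shape
    Q, R = householder_qr(A)
    c = Q.T @ b
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (c[i] - R[i, i + 1:n] @ x[i + 1:]) / R[i, i]
    return x

# Illustrative data (not the exam's matrix).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
x = lstsq_via_qr(A, b)
```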
Paper 1, Section I, B
Part IB, 2011 commentOrthogonal monic polynomials are defined with respect to the inner product , where is of degree . Show that such polynomials obey a three-term recurrence relation
for appropriate choices of and .
Now suppose that is an even function of . Show that the are even or odd functions of according to whether is even or odd.
Paper 4, Section I, B
Part IB, 2011 commentConsider the multistep method for numerical solution of the differential equation :
What does it mean to say that the method is of order , and that the method is convergent?
Show that the method is of order if
and give the conditions on that ensure convergence.
Hence determine for what values of and the two-step method
is (a) convergent, and (b) of order 3.
Paper 1, Section II, B
Part IB, 2011 commentConsider a function defined on the domain . Find constants , so that for any fixed ,
is exactly satisfied for polynomials of degree less than or equal to two.
By using the Peano kernel theorem, or otherwise, show that
where . Thus show that
Paper 2, Section II, B
Part IB, 2011 commentWhat is the QR-decomposition of a matrix A? Explain how to construct the matrices and by the Gram-Schmidt procedure, and show how the decomposition can be used to solve the matrix equation when is a square matrix.
Why is this procedure not useful for numerical decomposition of large matrices? Give a brief description of an alternative procedure using Givens rotations.
Find a QR-decomposition for the matrix
Is your decomposition unique? Use the decomposition you have found to solve the equation
Paper 3, Section II, B
Part IB, 2011 commentA Gaussian quadrature formula provides an approximation to the integral
which is exact for all that are polynomials of degree .
Write down explicit expressions for the in terms of integrals, and explain why it is necessary that the are the zeroes of a (monic) polynomial of degree that satisfies for any polynomial of degree less than
The first such polynomials are . Show that the Gaussian quadrature formulae for are
Verify the result for by considering .
Paper 1, Section I, D
Part IB, 2012 commentFind the LU factorization of the matrix and use it to solve the system via forward and backward substitution. [Other methods of solution are not acceptable.]
Paper 4, Section I, D
Part IB, 2012 commentState the Dahlquist equivalence theorem regarding convergence of a multistep method.
The multistep method, with a real parameter ,
is of order 2 for any , and also of order 3 for . Determine all values of for which the method is convergent, and find the order of convergence.
Paper 1, Section II, D
Part IB, 2012 commentFor a numerical method for solving , define the linear stability domain, and state when such a method is A-stable.
Determine all values of the real parameter for which the Runge-Kutta method
is A-stable.
Paper 3, Section II, D
Part IB, 2012 commentDefine the QR factorization of an matrix and explain how it can be used to solve the least squares problem of finding the vector which minimises , where , and the norm is the Euclidean one.
Define a Householder transformation and show that it is an orthogonal matrix.
Using a Householder transformation, solve the least squares problem for
giving both and .
Paper 2, Section II, D
Part IB, 2012 commentLet be the sequence of monic polynomials of degree orthogonal on the interval with respect to the weight function .
Prove that each has distinct zeros in the interval .
Let , and let satisfy the following three-term recurrence relation:
Set
Prove that , and deduce that all the eigenvalues of are distinct and reside in .
Paper 1, Section I, C
Part IB, 2013 commentDetermine the nodes of the two-point Gaussian quadrature
and express the coefficients in terms of . [You don't need to find numerical values of the coefficients.]
Paper 4, Section I, C
Part IB, 2013 commentFor a continuous function , and distinct points , define the divided difference of order .
Given points , let be the polynomial of degree that interpolates at these points. Prove that can be written in the Newton form
Paper 1, Section II, C
Part IB, 2013 commentDefine the QR factorization of an matrix and explain how it can be used to solve the least squares problem of finding the vector which minimises , where , and the norm is the Euclidean one.
Define a Givens rotation and show that it is an orthogonal matrix.
Using a Givens rotation, solve the least squares problem for
giving both and .
Paper 3, Section II, C
Part IB, 2013 commentLet be a formula of numerical differentiation which is exact on polynomials of degree 2, and let
be its error.
Find the values of the coefficients .
Using the Peano kernel theorem, find the least constant such that, for all functions , we have
Paper 2, Section II, C
Part IB, 2013 commentExplain briefly what is meant by the convergence of a numerical method for solving the ordinary differential equation
Prove from first principles that if the function is sufficiently smooth and satisfies the Lipschitz condition
for some , then the backward Euler method
converges and find the order of convergence.
Find the linear stability domain of the backward Euler method.
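The backward Euler method of this question can be sketched directly; since it is implicit, each step requires solving for the new value, done here by fixed-point iteration (adequate when h times the Lipschitz constant of f is below 1). The test problem y' = -y, y(0) = 1 is an illustrative choice, not from the exam.

```python
import numpy as np

def backward_euler(f, y0, t0, t1, n):
    """Backward Euler: y_{k+1} = y_k + h f(t_{k+1}, y_{k+1}),
    with the implicit equation solved by fixed-point iteration."""
    h = (t1 - t0) / n
    t, y = t0, float(y0)
    for _ in range(n):
        t += h
        z = y                       # initial guess: previous value
        for _ in range(50):         # contraction factor h * L, so this
            z = y + h * f(t, z)     # converges rapidly for small h
        y = z
    return y

# Test problem y' = -y, y(0) = 1, exact solution e^{-t};
# backward Euler is first order, so the error shrinks like h.
approx = backward_euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
```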
Paper 1, Section I,
Part IB, 2014 comment(i) A general multistep method for the numerical approximation to the scalar differential equation is given by
where . Show that this method is of order if and only if
where
(ii) A particular three-step implicit method is given by
where the are chosen to make the method third order. [The need not be found.] For what values of is the method convergent?
Paper 4, Section I, C
Part IB, 2014 commentConsider the quadrature given by
for , disjoint and . Show that it is not possible to make this quadrature exact for all polynomials of order .
For the case that and , by considering orthogonal polynomials find suitable and that make the quadrature exact on cubic polynomials.
[Hint: and ]
Paper 1, Section II, C
Part IB, 2014 commentDefine a Householder transformation and show that it is an orthogonal matrix. Briefly explain how these transformations can be used for QR factorisation of an matrix.
Using Householder transformations, find a QR factorisation of
Using this factorisation, find the value of for which
has a unique solution .
Paper 3, Section II, C
Part IB, 2014 commentA Runge-Kutta scheme is given by
for the solution of an autonomous differential equation , where is a real parameter. What is the order of the scheme? Identify all values of for which the scheme is A-stable. Determine the linear stability domain for this range.
Paper 2, Section II, C
Part IB, 2014 commentA linear functional acting on is approximated using a linear scheme . The approximation is exact when is a polynomial of degree . The error is given by . Starting from the Taylor formula for with an integral remainder term, show that the error can be written in the form
subject to a condition on that you should specify. Give an expression for .
Find and such that the approximation scheme
is exact for all that are polynomials of degree 2 . Assuming , apply the Peano kernel theorem to the error. Find and sketch for .
Find the minimum values for the constants and for which
and show explicitly that both error bounds hold for .
Paper 1, Section I, 6D
Part IB, 2015 commentLet
where is a real parameter. Find the factorization of the matrix . Give the constraint on for A to be positive definite.
For , use this factorization to solve the system via forward and backward substitution.
Paper 4, Section I, D
Part IB, 2015 commentGiven distinct points , let be the real polynomial of degree that interpolates a continuous function at these points. State the Lagrange interpolation formula.
Prove that can be written in the Newton form
where is the divided difference, which you should define. [An explicit expression for the divided difference is not required.]
Explain why it can be more efficient to use the Newton form rather than the Lagrange formula.
Paper 1, Section II, 18D
Part IB, 2015 commentDetermine the real coefficients such that
is exact when is any real polynomial of degree 2 . Check explicitly that the quadrature is exact for with these coefficients.
State the Peano kernel theorem and define the Peano kernel . Use this theorem to show that if , and are chosen as above, then
Paper 3, Section II, D
Part IB, 2015 commentDefine the QR factorization of an matrix . Explain how it can be used to solve the least squares problem of finding the vector which minimises , where , and is the Euclidean norm.
Explain how to construct and by the Gram-Schmidt procedure. Why is this procedure not useful for numerical factorization of large matrices?
Let
Using the Gram-Schmidt procedure find a QR decomposition of A. Hence solve the least squares problem giving both and .
Paper 2, Section II, D
Part IB, 2015 commentDefine the linear stability domain for a numerical method to solve . What is meant by an A-stable method? Briefly explain the relevance of these concepts in the numerical solution of ordinary differential equations.
Consider
where . What is the order of this method?
Find the linear stability domain of this method. For what values of is the method A-stable?
Paper 1, Section I, D
Part IB, 2016 comment(a) What are real orthogonal polynomials defined with respect to an inner product? What does it mean for such polynomials to be monic?
(b) Real monic orthogonal polynomials, , of degree , are defined with respect to the inner product,
where is a positive weight function. Show that such polynomials obey the three-term recurrence relation,
for appropriate and which should be given in terms of inner products.
Paper 4, Section I, D
Part IB, 2016 comment(a) Define the linear stability domain for a numerical method to solve . What is meant by an A-stable method?
(b) A two-stage Runge-Kutta scheme is given by
where is the step size and . Show that the order of this scheme is at least two. For this scheme, find the intersection of the linear stability domain with the real axis. Hence show that this method is not A-stable.
Paper 1, Section II, D
Part IB, 2016 comment(a) Consider a method for numerically solving an ordinary differential equation (ODE) for an initial value problem, . What does it mean for a method to converge over where ? What is the definition of the order of a method?
(b) A general multistep method for the numerical solution of an ODE is
where is a fixed positive integer. Show that this method is at least of order if and only if
(c) State the Dahlquist equivalence theorem regarding the convergence of a multistep method.
(d) Consider the multistep method,
Determine the values of and (in terms of the real parameter ) such that the method is at least third order. For what values of does the method converge?
Paper 3, Section II, D
Part IB, 2016 comment(a) Determine real quadratic functions such that the interpolation formula,
is exact when is any real polynomial of degree 2 .
(b) Use this formula to construct approximations for and which are exact when is any real polynomial of degree 2 . Calculate these approximations for and comment on your answers.
(c) State the Peano kernel theorem and define the Peano kernel . Use this theorem to find the minimum values of the constants and such that
and
where . Check that these inequalities hold for .
Paper 2, Section II, D
Part IB, 2016 comment(a) Define a Givens rotation and show that it is an orthogonal matrix.
(b) Define a QR factorization of a matrix with . Explain how Givens rotations can be used to find and .
(c) Let
(i) Find a QR factorization of using Givens rotations.
(ii) Hence find the vector which minimises , where is the Euclidean norm. What is ?
Paper 1, Section I, C
Part IB, 2017 commentGiven real points , define the Lagrange cardinal polynomials . Let be the polynomial of degree that interpolates the function at these points. Express in terms of the values , and the Lagrange cardinal polynomials.
Define the divided difference and give an expression for it in terms of and . Prove that
for some number .
Paper 4, Section I, C
Part IB, 2017 commentFor the matrix
find a factorization of the form
where is diagonal and is lower triangular with ones on its diagonal.
For what values of is positive definite?
In the case find the Cholesky factorization of .
Paper 1, Section II, C
Part IB, 2017 commentA three-stage explicit Runge-Kutta method for solving the autonomous ordinary differential equation
is given by
where
and is the time-step. Derive sufficient conditions on the coefficients , and for the method to be of third order.
Assuming that these conditions hold, verify that belongs to the linear stability domain of the method.
Paper 2, Section II, C
Part IB, 2017 commentDefine the linear least-squares problem for the equation , where is an matrix with is a given vector and is an unknown vector.
If , where is an orthogonal matrix and is an upper triangular matrix in standard form, explain why the least-squares problem is solved by minimizing the Euclidean norm .
Using the method of Householder reflections, find a QR factorization of the matrix
Hence find the solution of the least-squares problem in the case
Paper 3, Section II, C
Part IB, 2017 commentLet be the th monic orthogonal polynomial with respect to the inner product
on , where is a positive weight function.
Prove that, for has distinct zeros in the interval .
Let be distinct points. Show that the quadrature formula
is exact for all if the weights are chosen to be
Show further that the quadrature formula is exact for all if the nodes are chosen to be the zeros of (Gaussian quadrature). [Hint: Write as , where .]
Use the Peano kernel theorem to write an integral expression for the approximation error of Gaussian quadrature for sufficiently differentiable functions. (You should give a formal expression for the Peano kernel but are not required to evaluate it.)
Paper 1, Section I, D
Part IB, 2018 commentThe Trapezoidal Rule for solving the differential equation
is defined by
where .
Determine the minimum order of convergence of this rule for general functions that are sufficiently differentiable. Show with an explicit example that there is a function for which the local truncation error is for some constant .
Paper 4, Section I, D
Part IB, 2018 commentwhere and are real parameters. Find the factorisation of the matrix . For what values of does the equation have a unique solution for ?
For , use the decomposition with forward and backward substitution to determine a value for for which a solution to exists. Find the most general solution to the equation in this case.
Paper 1, Section II, D
Part IB, 2018 commentShow that if then the matrix transformation
is orthogonal. Show further that, for any two vectors of equal length,
Explain how to use such transformations to convert an matrix with into the form , where is an orthogonal matrix and is an upper-triangular matrix, and illustrate the method using the matrix
Paper 3, Section II, D
Part IB, 2018 commentTaylor's theorem for functions is given in the form
Use integration by parts to show that
Let be a linear functional on such that for . Show that
where is the Peano kernel function. [You may assume that the functional commutes with integration over a fixed interval.]
The error in the mid-point rule for numerical quadrature on is given by
Show that if is a linear polynomial. Find the Peano kernel function corresponding to explicitly and verify the formula ( ) in the case .
Paper 2, Section II, D
Part IB, 2018 commentShow that the recurrence relation
where is an inner product on real polynomials, produces a sequence of orthogonal, monic, real polynomials of degree exactly of the real variable , provided that is a monic, real polynomial of degree exactly .
Show that the choice leads to a three-term recurrence relation of the form
where and are constants that should be determined in terms of the inner products and .
Use this recurrence relation to find the first four monic Legendre polynomials, which correspond to the inner product defined by
Paper 1, Section I, C
Part IB, 2019 commentLet be the smallest interval that contains the distinct real numbers , and let be a continuous function on that interval.
Define the divided difference of degree .
Prove that the polynomial of degree that interpolates the function at the points is equal to the Newton polynomial
Prove the recursive formula
for
Paper 4, Section I, C
Part IB, 2019 commentCalculate the factorization of the matrix
Use this to evaluate and to solve the equation
with
Paper 1, Section II, C
Part IB, 2019 comment(a) An -step method for solving the ordinary differential equation
is given by
where and are constant coefficients, with , and is the time-step. Prove that the method is of order if and only if
as , where
(b) Show that the Adams-Moulton method
is of third order and convergent.
[You may assume the Dahlquist equivalence theorem if you state it clearly.]
Paper 3, Section II, C
Part IB, 2019 comment(a) Let be a positive weight function on the interval . Show that
defines an inner product on .
(b) Consider the sequence of polynomials defined by the three-term recurrence relation
where
and the coefficients (for ) and (for ) are given by
Prove that this defines a sequence of monic orthogonal polynomials on .
(c) The Hermite polynomials are orthogonal on the interval with weight function . Given that
deduce that the Hermite polynomials satisfy a relation of the form with and . Show that .
(d) State, without proof, how the properties of the Hermite polynomial , for some positive integer , can be used to estimate the integral
where is a given function, by the method of Gaussian quadrature. For which polynomials is the quadrature formula exact?
Paper 2, Section II, C
Part IB, 2019 commentDefine the linear least squares problem for the equation
where is a given matrix with is a given vector and is an unknown vector.
Explain how the linear least squares problem can be solved by obtaining a factorization of the matrix , where is an orthogonal matrix and is an upper-triangular matrix in standard form.
Use the Gram-Schmidt method to obtain a factorization of the matrix
and use it to solve the linear least squares problem in the case
Paper 1, Section I, C
Part IB, 2020 comment(a) Find an factorisation of the matrix
where the diagonal elements of are .
(b) Use this factorisation to solve the linear system , where
Paper 1, Section II, C
Part IB, 2020 comment(a) Given a set of distinct real points and real numbers , show that the interpolating polynomial , can be written in the form
where the coefficients are to be determined.
(b) Consider the approximation of the integral of a function by a finite sum,
where the weights and nodes are independent of . Derive the expressions for the weights that make the approximation (1) exact for being any polynomial of degree , i.e. .
Show that by choosing to be zeros of the polynomial of degree , one of a sequence of orthogonal polynomials defined with respect to the scalar product
the approximation (1) becomes exact for (i.e. for all polynomials of degree ).
(c) On the interval the scalar product (2) generates orthogonal polynomials given by
Find the values of the nodes for which the approximation (1) is exact for all polynomials of degree 7 (i.e. ) but no higher.
Paper 2, Section II, C
Part IB, 2020 commentConsider a multistep method for numerical solution of the differential equation :
where , and and are constants.
(a) Define the order of a method for numerically solving an ODE.
(b) Show that in general an explicit method of the form has order 1 . Determine the values of and for which this multistep method is of order 3 .
(c) Show that the multistep method (*) is convergent.
Paper 1, Section I, B
Part IB, 2021 commentProve, from first principles, that there is an algorithm that can determine whether any real symmetric matrix is positive definite or not, with the computational cost (number of arithmetic operations) bounded by .
[Hint: Consider the LDL decomposition.]
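Following the hint, a matrix is positive definite if and only if all pivots in the (symmetric) elimination stay positive, which is what an LDL^T factorisation attempt checks in O(n^3) operations. A minimal sketch (Gaussian elimination on a copy of the matrix, testing each pivot; the elimination pivots coincide with the d_k of the LDL^T factorisation for symmetric input):

```python
# Sketch: positive-definiteness test via elimination pivots, O(n^3) cost.
# Assumes A is symmetric (as in the question); behaviour is unspecified otherwise.

def is_positive_definite(A):
    n = len(A)
    M = [row[:] for row in A]        # work on a copy
    for k in range(n):
        if M[k][k] <= 0:             # a non-positive pivot rules out definiteness
            return False
        for i in range(k + 1, n):
            r = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= r * M[k][j]
    return True
```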
Paper 4, Section I, B
Part IB, 2021 comment(a) Given the data , find the interpolating cubic polynomial in the Newton form.
(b) We add to the data one more value, . Find the interpolating quartic polynomial for the extended data in the Newton form.
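The Newton form is convenient precisely because part (b)'s extension costs only one new divided-difference coefficient; the earlier coefficients are unchanged. A sketch of the divided-difference table and Horner-style evaluation (the exam's data values are elided, so the values below, samples of t^2 + 1, are illustrative):

```python
# Sketch: Newton-form interpolation via divided differences.

def newton_coeffs(xs, ys):
    """In-place divided-difference table: c[k] = f[x_0, ..., x_k]."""
    c = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, coeffs, t):
    """Horner-style evaluation of the Newton form at t."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (t - xs[k]) + coeffs[k]
    return result

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]        # samples of t^2 + 1 (illustrative data)
c = newton_coeffs(xs, ys)
value = newton_eval(xs, c, 1.5)   # exact for quadratic data: 1.5^2 + 1 = 3.25
```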
Paper 1, Section II, B
Part IB, 2021 commentFor the ordinary differential equation
where and the function is analytic, consider an explicit one-step method described as the mapping
Here and with time step , producing numerical approximations to the exact solution of equation , with being the initial value of the numerical solution.
(i) Define the local error of a one-step method.
(ii) Let be a norm on and suppose that
for all , where is some positive constant. Let be given and denote the initial error (potentially non-zero). Show that if the local error of the one-step method ( ) is , then
(iii) Let and consider equation where is time-independent satisfying for all , where is a positive constant. Consider the one-step method given by
Use part (ii) to show that for this method we have that equation (††) holds (with a potentially different constant ) for .
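The bound in part (ii) is the standard "local error O(h^{p+1}) implies global error O(h^p)" argument. The exam's specific one-step method is elided above, so as a stand-in the sketch below uses explicit Euler, whose local error is O(h^2): halving h should roughly halve the global error at a fixed final time.

```python
# Sketch: numerical illustration of the global error bound for a first-order
# one-step method. Explicit Euler on y' = -y, y(0) = 1, is an illustrative
# stand-in for the (elided) method in the question.
import math

def euler_error(h, T=1.0):
    y = 1.0
    for _ in range(round(T / h)):
        y += h * (-y)              # explicit Euler step for y' = -y
    return abs(y - math.exp(-T))   # global error at t = T

e1 = euler_error(0.01)
e2 = euler_error(0.005)
ratio = e1 / e2                    # close to 2 for a first-order method
```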
Paper 2, Section II, 17B
Part IB, 2021 comment(a) Define Householder reflections and show that a real Householder reflection is symmetric and orthogonal. Moreover, show that if , where is a Householder reflection and is a full matrix, then the computational cost (number of arithmetic operations) of computing can be operations, as opposed to for standard matrix products.
(b) Show that for any there exists an orthogonal matrix such that
In particular, has zero entries below the first subdiagonal. Show that one can compute such a and (they may not be unique) using arithmetic operations.
[Hint: Multiply A from the left and right with Householder reflections.]
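Following the hint, the reduction to upper Hessenberg form applies a Householder reflection from the left to annihilate a column below the first subdiagonal, and then the same reflection from the right to preserve similarity. A Python sketch (the test matrix is an illustrative assumption; since a similarity transform preserves the trace and eigenvalues, the trace check below is a cheap sanity test):

```python
# Sketch: reduction of A to upper Hessenberg form H = Q^T A Q by Householder
# reflections applied from both sides; O(n^3) operations overall.

def hessenberg(A):
    n = len(A)
    H = [row[:] for row in A]
    for k in range(n - 2):
        # Householder vector annihilating entries below the first subdiagonal in column k
        x = [H[i][k] for i in range(k + 1, n)]
        alpha = -(1 if x[0] >= 0 else -1) * sum(t * t for t in x) ** 0.5
        v = x[:]
        v[0] -= alpha                      # sign choice avoids cancellation
        norm2 = sum(t * t for t in v)
        if norm2 == 0:
            continue
        # apply P = I - 2 v v^T / (v^T v) from the left (rows k+1..n-1)
        for j in range(n):
            s = 2 * sum(v[i] * H[k + 1 + i][j] for i in range(len(v))) / norm2
            for i in range(len(v)):
                H[k + 1 + i][j] -= s * v[i]
        # apply P from the right (columns k+1..n-1), preserving similarity
        for i in range(n):
            s = 2 * sum(H[i][k + 1 + j] * v[j] for j in range(len(v))) / norm2
            for j in range(len(v)):
                H[i][k + 1 + j] -= s * v[j]
    return H

A = [[4.0, 1.0, 2.0, 3.0],
     [1.0, 3.0, 0.0, 1.0],
     [2.0, 0.0, 5.0, 2.0],
     [3.0, 1.0, 2.0, 6.0]]
H = hessenberg(A)   # entries below the first subdiagonal become (numerically) zero
```

For symmetric input, as in the question about skew-symmetric and symmetric matrices elsewhere on this page, the Hessenberg form is in fact tridiagonal, which is what reduces the operation count.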
Paper 3, Section II, B
Part IB, 2021 commentThe functions are generated by the formula
(a) Show that is a monic polynomial of degree . Write down the explicit forms of .
(b) Demonstrate the orthogonality of these polynomials with respect to the scalar product
i.e. that for , and show that
where .
(c) Assuming that a three-term recurrence relation in the form
holds, find the explicit expressions for and as functions of .
[Hint: you may use the fact that
A1.20 B1.20
Part II, 2001 comment(i) Let be a symmetric matrix such that
Prove that it is positive definite.
(ii) Prove that both Jacobi and Gauss-Seidel methods for the solution of the linear system , where the matrix obeys the conditions of (i), converge.
[You may quote the Householder-John theorem without proof.]
A2.19 B2.19
Part II, 2001 comment(i) Define -step BDF (backward differentiation formula) methods for the numerical solution of ordinary differential equations and derive explicitly their coefficients.
(ii) Prove that the linear stability domain of the two-step BDF method includes the interval .
A3.19 B3.20
Part II, 2001 comment(i) The diffusion equation
is discretized by the finite-difference method
where and is a constant. Derive the order of magnitude (as a power of ) of the local error for different choices of .
(ii) Investigate the stability of the above finite-difference method for different values of by the Fourier technique.
A4.22 B4.20
Part II, 2001 commentWrite an essay on the computation of eigenvalues and eigenvectors of matrices.
A1.20 B1.20
Part II, 2002 comment(i) Let be an symmetric real matrix with distinct eigenvalues and corresponding eigenvectors , where . Given , the sequence is generated in the following manner. We set
Show that if
where is a real scalar and is chosen so that , then
Give an explicit expression for .
(ii) Use the above result to prove that, if is small,
and obtain the numbers and .
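The iteration described in part (i) is the power method with normalisation: for a symmetric matrix with a dominant eigenvalue, the iterates align with the dominant eigenvector, and the Rayleigh quotient converges to the dominant eigenvalue. A minimal sketch (the 2×2 matrix is an illustrative assumption, with eigenvalues 3 and 1):

```python
# Sketch: power method with normalisation and Rayleigh-quotient estimate.

def power_method(A, iterations=200):
    n = len(A)
    x = [1.0] + [0.0] * (n - 1)            # arbitrary nonzero starting vector
    for _ in range(iterations):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]          # renormalise each iteration
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    rayleigh = sum(x[i] * Ax[i] for i in range(n))   # x has unit length
    return rayleigh, x

A = [[2.0, 1.0], [1.0, 2.0]]
lam, vec = power_method(A)   # dominant eigenvalue 3, eigenvector along (1, 1)
```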
A2.19 B2.19
Part II, 2002 comment(i)
Given the finite-difference method
define
Prove that this method is stable if and only if
[You may quote without proof known properties of the Fourier transform.]
(ii) Find the range of the parameter such that the method
is stable. Supposing that this method is used to solve the diffusion equation for , determine the order of magnitude of the local error as a power of .
A3.19 B3.20
Part II, 2002 comment(i) Determine the order of the multistep method
for the solution of ordinary differential equations for different choices of in the range .
(ii) Prove that no such choice of results in a method whose linear stability domain includes the interval .
A4.22 B4.20
Part II, 2002 commentWrite an essay on the method of conjugate gradients. You should describe the algorithm, present an analysis of its properties and discuss its advantages.
[Any theorems quoted should be stated precisely but need not be proved.]
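For reference alongside such an essay, the standard conjugate gradient algorithm is short enough to state in full. A sketch for a symmetric positive definite system (in exact arithmetic it terminates in at most n steps; the 2×2 data are an illustrative assumption):

```python
# Sketch: conjugate gradient method for A x = b with A symmetric positive definite.

def conjugate_gradient(A, b, tol=1e-12):
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x for the zero initial guess
    p = r[:]                       # first search direction
    rs = sum(v * v for v in r)
    for _ in range(n):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]   # A-conjugate update
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)   # exact solution is (1/11, 7/11)
```

Preconditioned variants replace the residual by the solution of a (cheap) auxiliary system, which is the approach the essay question expects to be discussed.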
A1.20 B1.20
Part II, 2003 comment(i) The linear algebraic equations , where is symmetric and positive-definite, are solved with the Gauss-Seidel method. Prove that the iteration always converges.
(ii) The Poisson equation is given in the bounded, simply connected domain , with zero Dirichlet boundary conditions on . It is approximated by the five-point formula
where , and is in the interior of .
Assume for the sake of simplicity that the intersection of with the grid consists only of grid points, so that no special arrangements are required near the boundary. Prove that the method can be written in a vector notation, with a negative-definite matrix .
A2.19 B2.19
Part II, 2003 comment(i) Explain briefly what is meant by the convergence of a numerical method for ordinary differential equations.
(ii) Suppose the sufficiently-smooth function obeys the Lipschitz condition: there exists such that
Prove from first principles, without using the Dahlquist equivalence theorem, that the trapezoidal rule
for the solution of the ordinary differential equation
converges.
A3.19 B3.20
Part II, 2003 comment(i) The diffusion equation
with the initial condition and zero boundary conditions at and , is solved by the finite-difference method
where and .
Assuming sufficient smoothness of the function , and that remains constant as and become small, prove that the exact solution satisfies the numerical scheme with error .
(ii) For the problem defined in Part (i), assume that there exist such that . Prove that the method is stable for .
[Hint: You may use without proof the Gerschgorin theorem: All the eigenvalues of the matrix are contained in , where
A4.23 B4.20
Part II, 2003 commentWrite an essay on the conjugate gradient method. Your essay should include:
(a) a statement of the method and a sketch of its derivation;
(b) discussion, without detailed proofs, but with precise statements of relevant theorems, of the conjugacy of the search directions;
(c) a description of the standard form of the algorithm;
(d) discussion of the connection of the method with Krylov subspaces.
A1.20 B1.20
Part II, 2004 comment(i) Define the Backward Difference Formula (BDF) method for ordinary differential equations and derive its two-step version.
(ii) Prove that the interval belongs to the linear stability domain of the two-step BDF method.
A2.19 B2.20
Part II, 2004 comment(i) The five-point equations, which are obtained when the Poisson equation (with Dirichlet boundary conditions) is discretized in a square, are
where for all .
Formulate the Gauss-Seidel method for the above linear system and prove its convergence. In the proof you should carefully state any theorems you use. [You may use Part (ii) of this question.]
(ii) By arranging the two-dimensional arrays and into the column vectors and respectively, the linear system described in Part (i) takes the matrix form . Prove that, regardless of the ordering of the points on the grid, the matrix is symmetric and positive definite.
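A Gauss-Seidel sweep for the five-point Poisson system updates each grid value in place from the latest neighbour values. A minimal sketch on a small square grid with zero boundary values (the right-hand side f = 1 and the grid size are illustrative assumptions, not the exam's data):

```python
# Sketch: Gauss-Seidel iteration for the five-point discretisation of the
# Poisson equation on the unit square with zero Dirichlet boundary conditions.

def gauss_seidel_poisson(m, sweeps, f=1.0):
    h = 1.0 / (m + 1)
    u = [[0.0] * m for _ in range(m)]
    for _ in range(sweeps):
        for i in range(m):
            for j in range(m):
                north = u[i - 1][j] if i > 0 else 0.0
                south = u[i + 1][j] if i < m - 1 else 0.0
                west = u[i][j - 1] if j > 0 else 0.0
                east = u[i][j + 1] if j < m - 1 else 0.0
                # five-point formula solved for the centre value
                u[i][j] = (north + south + west + east - h * h * f) / 4.0
    return u

u = gauss_seidel_poisson(m=8, sweeps=500)
center = u[3][3]   # near the centre of the square; negative for f = 1
```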
A3.19 B3.20
Part II, 2004 comment(i) The diffusion equation
with the initial condition , and with zero boundary conditions at and , can be solved by the method
where , and . Prove that implies convergence.
(ii) By discretizing the same equation and employing the same notation as in Part (i), determine conditions on such that the method
is stable.
A4.22 B4.20
Part II, 2004 commentWrite an essay on the method of conjugate gradients. You should define the method, list its main properties and sketch the relevant proof. You should also prove that (in exact arithmetic) the method terminates in a finite number of steps, briefly mention the connection with Krylov subspaces, and describe the approach of preconditioned conjugate gradients.
1.II.38A
Part II, 2005 commentLet
where is a positive integer and ranges over all integers, be a finite-difference method for the advection equation
Here is the Courant number.
(a) Show that the local error of the method is .
(b) Determine the range of for which the method is stable.
2.II.38A
Part II, 2005 commentDefine a Krylov subspace .
Let be the dimension of . Prove that the sequence increases monotonically. Show that, moreover, there exists an integer with the following property: for , while for . Assuming that has a full set of eigenvectors, show that is equal to the number of eigenvectors of required to represent the vector .
3.II.38A
Part II, 2005 commentConsider the Runge-Kutta method
for the solution of the scalar ordinary differential equation . Here is a real parameter.
(a) Determine the order of the method.
(b) Find the range of values of for which the method is A-stable.
4.II.39A
Part II, 2005 commentAn skew-symmetric matrix is converted into an upper-Hessenberg form , say, by Householder reflections.
(a) Describe each step of the procedure and observe that is tridiagonal. Your algorithm should take advantage of the special form of to reduce the number of computations.
(b) Compare the cost (counting only products and looking only at the leading term) of converting a skew-symmetric and a symmetric matrix to an upper-Hessenberg form using Householder reflections.
1.II.38C
Part II, 2006 comment(a) Define the Jacobi method with relaxation for solving the linear system .
(b) Let be a symmetric positive definite matrix with diagonal part such that the matrix is also positive definite. Prove that the iteration always converges if the relaxation parameter is equal to 1.
(c) Let be the tridiagonal matrix with diagonal elements and off-diagonal elements . Prove that convergence occurs if satisfies . Explain briefly why the choice is optimal.
[You may quote without proof any relevant result about the convergence of iterative methods and about the eigenvalues of matrices.]
2.II.38C
Part II, 2006 commentIn the unit square the Poisson equation , with zero Dirichlet boundary conditions, is being solved by the five-point formula using a square grid of mesh size ,
Let be the exact solution, and let be the error of the five-point formula at the th grid point. Justifying each step, prove that
where is some constant.
3.II.38C
Part II, 2006 comment(a) For the equation , consider the following multistep method with steps,
where is the step size and are specified constants with . Prove that this method is of order if and only if
for any polynomial of degree . Deduce that there is no -step method of order .
[You may use the fact that, for any , the Hermite interpolation problem
is uniquely solvable in the space of polynomials of degree .]
(b) State the Dahlquist equivalence theorem regarding the convergence of a multistep method. Determine all the values of the real parameter for which the multistep method
is convergent, and determine the order of convergence.
4.II.39C
Part II, 2006 commentThe difference equation
where , is used to approximate a solution of the diffusion equation .
(a) Prove that, as with constant , the local error of the method is .
(b) Applying the Fourier stability test, show that the method is stable if and only if .
1.II.38C
Part II, 2007 comment(a) For a numerical method to solve , define the linear stability domain and state when such a method is A-stable.
(b) Determine all values of the real parameter for which the Runge-Kutta method
is A-stable.
2.II.38C
Part II, 2007 comment(a) State the Householder-John theorem and explain how it can be used to design iterative methods for solving a system of linear equations .
(b) Let where is the diagonal part of , and and are, respectively, the strictly lower and strictly upper triangular parts of . Given a vector , consider the following iterative scheme:
Prove that if is a symmetric positive definite matrix, and , then the above iteration converges to the solution of the system .
3.II.38C
Part II, 2007 comment(a) Prove that all Toeplitz symmetric tridiagonal matrices
share the same eigenvectors with components , , and eigenvalues to be determined.
(b) The diffusion equation
is approximated by the Crank-Nicolson scheme
where , and is an approximation to . Assuming that , show that the above scheme can be written in the form
where and the real matrices and should be found. Using matrix analysis, find the range of for which the scheme is stable. [Do not use Fourier analysis.]
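The Crank-Nicolson step for the diffusion equation solves a tridiagonal linear system at each time level: with T the second-difference matrix, one advances by (I - (mu/2) T) u^{n+1} = (I + (mu/2) T) u^n. A sketch with zero boundary conditions, solving the tridiagonal system by the Thomas algorithm in O(N) operations (grid size and mu are illustrative assumptions; the scheme is unconditionally stable, so a sine-mode profile should decay for any mu > 0):

```python
# Sketch: one Crank-Nicolson step for u_t = u_xx with zero Dirichlet boundaries.
import math

def second_difference(u):
    n = len(u)
    return [(u[i - 1] if i > 0 else 0.0) - 2 * u[i] + (u[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub/main/super diagonals a, b, c."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, mu):
    n = len(u)
    d2 = second_difference(u)
    rhs = [u[i] + 0.5 * mu * d2[i] for i in range(n)]       # (I + mu/2 T) u^n
    a = [-0.5 * mu] * n                                      # (I - mu/2 T) diagonals
    b = [1.0 + mu] * n
    c = [-0.5 * mu] * n
    return thomas(a, b, c, rhs)

n = 9
u = [math.sin(math.pi * (i + 1) / (n + 1)) for i in range(n)]   # lowest sine mode
u1 = crank_nicolson_step(u, mu=1.0)
max_ratio = max(abs(u1[i] / u[i]) for i in range(n))            # < 1: no growth
```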
4.II.39C
Part II, 2007 comment(a) Suppose that is a real matrix, and that and are given so that . Further, let be a non-singular matrix such that , where is the first coordinate vector and . Let . Prove that the eigenvalues of are together with the eigenvalues of the bottom right submatrix of
(b) Suppose again that is a real matrix, and that two linearly independent vectors are given such that the linear subspace spanned by and is invariant under the action of , i.e.,
Denote by an matrix whose two columns are the vectors and , and let be a non-singular matrix such that is upper triangular, that is,
Again let . Prove that the eigenvalues of are the eigenvalues of the top left submatrix of together with the eigenvalues of the bottom right submatrix of
1.II.38C
Part II, 2008 commentThe Poisson equation in the unit square , with zero boundary conditions on , is discretized with the nine-point formula
where , and are grid points.
(a) Prove that, for any ordering of the grid points, the method can be written as with a symmetric positive-definite matrix .
(b) Describe the Jacobi method for solving a linear system of equations, and prove that it converges for the above system.
[You may quote without proof the corollary of the Householder-John theorem regarding convergence of the Jacobi method.]
2.II.38C
Part II, 2008 commentThe advection equation
is solved by the leapfrog scheme
where and is the Courant number.
(a) Determine the local error of the method.
(b) Applying the Fourier technique, find the range of for which the method is stable.
3.II.38C
Part II, 2008 comment(a) A numerical method for solving the ordinary differential equation
generates for every a sequence , where is an approximation to and . Explain what is meant by the convergence of the method.
(b) Prove from first principles that if the function is sufficiently smooth and satisfies the Lipschitz condition
for some , then the trapezoidal rule
converges.
4.II.39C
Part II, 2008 commentLet be a real matrix with linearly independent eigenvectors. When calculating eigenvalues of , the sequence , is generated by the power method , where is a real nonzero vector.
(a) Describe the asymptotic properties of the sequence , both in the case where the eigenvalues of satisfy , and in the case where . In the latter case explain how the (possibly complex-valued) eigenvalues and their corresponding eigenvectors can be determined.
(b) Let , and suppose that, for a large , we obtain the vectors
Find two eigenvalues of and their corresponding eigenvectors.
Paper 3, Section II, B
Part II, 2009 commentProve that all Toeplitz tridiagonal matrices of the form
share the same eigenvectors , with the components , where , and find their eigenvalues.
The advection equation
is approximated by the Crank-Nicolson scheme
where , and is an approximation to . Assuming that , show that the above scheme can be written in the form
where and the real matrices and should be found. Using matrix analysis, find the range of for which the scheme is stable. [Fourier analysis is not acceptable.]
Paper 2, Section II, B
Part II, 2009 commentThe Poisson equation in the unit square , equipped with appropriate boundary conditions on , is discretized with the nine-point formula:
where , and are grid points.
(i) Find the local error of approximation.
(ii) Prove that the error is smaller if happens to satisfy the Laplace equation
(iii) Hence show that the modified nine-point scheme
has the same smaller error as in (ii).
[Hint. The nine-point discretization of can be written as
where is the five-point discretization and
Paper 1, Section II, B
Part II, 2009 comment(i) Define the Jacobi method with relaxation for solving the linear system .
(ii) For and being the exact and the iterated solution, respectively, let be the error and the iteration matrix, so that
Express in terms of the matrix , its diagonal part and the relaxation parameter , and find the eigenvectors and the eigenvalues of for the tridiagonal matrix
[Hint: The eigenvectors and eigenvalues of are
(iii) For as above, let
be the expansion of the error with respect to the eigenvectors of .
Find the range of parameter which provides convergence of the method for any , and prove that, for any such , the rate of convergence is not faster than .
(iv) Show that, for some , the high frequency components of the error tend to zero much faster. Determine the optimal parameter which provides the largest suppression of the high frequency components per iteration, and find the corresponding attenuation factor (i.e. the least such that for ).
Paper 4, Section II, B
Part II, 2009 comment(a) For the -step -order Backward Differentiation Formula (BDF) for ordinary differential equations,
express the polynomial in a convenient explicit form.
(b) Prove that the interval belongs to the linear stability domain of the 2-step BDF method.
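The 2-step BDF method has the standard form y_{n+2} - (4/3) y_{n+1} + (1/3) y_n = (2h/3) f(t_{n+2}, y_{n+2}). A sketch applying it to the linear test problem y' = lam*y, for which the implicit step can be solved in closed form; the problem data are illustrative. The stiff run (lam = -100) decays without a step-size restriction, consistent with the stated stability property.

```python
# Sketch: the 2-step BDF method on y' = lam*y, with the second starting value
# taken from the exact solution.
import math

def bdf2(lam, h, y0, steps):
    ys = [y0, y0 * math.exp(lam * h)]   # exact value to start the 2-step method
    for _ in range(steps - 1):
        # (1 - 2h*lam/3) y_{n+2} = (4/3) y_{n+1} - (1/3) y_n
        y_next = ((4.0 / 3.0) * ys[-1] - (1.0 / 3.0) * ys[-2]) / (1.0 - 2.0 * h * lam / 3.0)
        ys.append(y_next)
    return ys

ys = bdf2(lam=-1.0, h=0.1, y0=1.0, steps=10)
err = abs(ys[-1] - math.exp(-1.0))      # second-order accurate at t = 1
stiff = bdf2(lam=-100.0, h=0.1, y0=1.0, steps=10)[-1]   # decays despite large |h*lam|
```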
Paper 1, Section II, A
Part II, 2010 comment(a) State the Householder-John theorem and explain its relation to the convergence analysis of splitting methods for solving a system of linear equations with a positive definite matrix .
(b) Describe the Jacobi method for solving a system , and deduce from the above theorem that if is a symmetric positive definite tridiagonal matrix,
then the Jacobi method converges.
[Hint: At the last step, you may find it useful to consider two vectors and .]
Paper 2, Section II, A
Part II, 2010 commentThe inverse discrete Fourier transform is given by the formula
Here, is the primitive root of unity of degree , and
(1) Show how to assemble in a small number of operations if we already know the Fourier transforms of the even and odd portions of :
(2) Describe the Fast Fourier Transform (FFT) method for evaluating and draw a relevant diagram for .
(3) Find the costs of the FFT for (only multiplications count).
(4) For , using the FFT technique, find for and
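The even/odd splitting in part (1) is exactly the radix-2 FFT butterfly: F[k] = E[k] + w^k O[k] and F[k + n/2] = E[k] - w^k O[k]. A recursive sketch (written with the e^{-2 pi i k/n} forward-transform convention; the inverse transform of the question differs only in the sign of the exponent and a 1/n factor, and the input sequence here is an illustrative assumption):

```python
# Sketch: recursive radix-2 FFT, O(n log n) for n a power of two.
import cmath

def fft(x):
    n = len(x)
    if n == 1:
        return x[:]
    even = fft(x[0::2])        # transform of the even-indexed part
    odd = fft(x[1::2])         # transform of the odd-indexed part
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)   # twiddle factor
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

y = fft([1, 1, 1, 1, 0, 0, 0, 0])   # transform of a "box" sequence, n = 8
```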
Paper 3, Section II, A
Part II, 2010 commentThe Poisson equation in the unit square on , is discretized with the five-point formula
where and are grid points.
Let be the exact solution, and let be the error of the five-point formula at the th grid point. Justifying each step, prove that
where is some constant independent of .
Paper 4, Section II, A
Part II, 2010 commentAn -stage explicit Runge-Kutta method of order , with constant step size , is applied to the differential equation .
(a) Prove that
where is a polynomial of degree .
(b) Prove that the order of any -stage explicit Runge-Kutta method satisfies the inequality and, for , write down an explicit expression for .
(c) Prove that no explicit Runge-Kutta method can be A-stable.
Paper 1, Section II, A
Part II, 2011 commentThe nine-point method for the Poisson equation (with zero Dirichlet boundary conditions) in a square, reads
where , for all .
(i) By arranging the two-dimensional arrays and into column vectors and respectively, the linear system above takes the matrix form . Prove that, regardless of the ordering of the points on the grid, the matrix is symmetric and negative definite.
(ii) Formulate the Jacobi method with relaxation for solving the above linear system.
(iii) Prove that the iteration converges if the relaxation parameter is equal to
[You may quote without proof any relevant result about convergence of iterative methods.]
Paper 2, Section II, A
Part II, 2011 commentLet be a real matrix with linearly independent eigenvectors. The eigenvalues of can be calculated from the sequence , which is generated by the power method
where is a real nonzero vector.
(i) Describe the asymptotic properties of the sequence in the case that the eigenvalues of satisfy , and the eigenvectors are of unit length.
(ii) Present the implementation details for the power method for the setting in (i) and define the Rayleigh quotient.
(iii) Let be the matrix
where is real and nonzero. Find an explicit expression for
Let the sequence be generated by the power method as above. Deduce from your expression for that the first and second components of tend to zero as . Further show that this implies as .
Paper 3, Section II, A
Part II, 2011 comment(i) The difference equation
where , is the basic equation used in the second-order Adams–Bashforth method and can be employed to approximate a solution of the diffusion equation . Prove that, as with constant , the local error of the method is .
(ii) By applying the Fourier stability test, show that the above method is stable if and only if .
(iii) Define the leapfrog scheme to approximate the diffusion equation and prove that it is unstable for every choice of .
Paper 4, Section II, A
Part II, 2011 comment(i) Consider the Poisson equation
with the periodic boundary conditions
and the normalization condition
Moreover, is analytic and obeys the periodic boundary conditions
Derive an explicit expression of the approximation of a solution by means of a spectral method. Explain the term convergence with spectral speed and state its validity for the approximation of .
(ii) Consider the second-order linear elliptic partial differential equation
with the periodic boundary conditions and normalization condition specified in (i). Moreover, and are given by
[Note that is a positive analytic periodic function.]
Construct explicitly the linear algebraic system that arises from the implementation of a spectral method to the above equation.
Paper 4, Section II,
Part II, 2012 comment(i) Formulate the conjugate gradient method for the solution of a system with and .
(ii) Prove that if the conjugate gradient method is applied in exact arithmetic then, for any , termination occurs after at most iterations.
(iii) The polynomial is the minimal polynomial of the matrix if it is the polynomial of lowest degree that satisfies . [Note that .] Give an example of a symmetric positive definite matrix with a quadratic minimal polynomial.
Prove that (in exact arithmetic) the conjugate gradient method requires at most iterations to calculate the exact solution of , where is the degree of the minimal polynomial of .
Paper 2, Section II, D
Part II, 2012 comment(i) The diffusion equation
with the initial condition , and with zero boundary conditions at and , can be solved numerically by the method
where , and . Prove that implies convergence.
(ii) By discretising the diffusion equation and employing the same notation as in (i) above, determine [without using Fourier analysis] conditions on and the constant such that the method
is stable.
Paper 3, Section II, D
Part II, 2012 commentThe inverse discrete Fourier transform is given by the formula
Here, is the primitive root of unity of degree and
(i) Show how to assemble in a small number of operations if the Fourier transforms of the even and odd parts of ,
are already known.
(ii) Describe the Fast Fourier Transform (FFT) method for evaluating , and draw a relevant diagram for .
(iii) Find the costs of the FFT method for (only multiplications count).
(iv) For use the FFT method to find when:
(a) ,
(b) .
Paper 1, Section II, D
Part II, 2012 commentThe Poisson equation in the unit interval on is discretised with the formula
where and are the grid points.
(i) Define the above system of equations in vector form and describe the relaxed Jacobi method with relaxation parameter for solving this linear system. For and being the exact solution and the iterated solution respectively, let be the error and the iteration matrix, so that
Express in terms of the matrix , the diagonal part of and , and find the eigenvectors and the eigenvalues of .
(ii) For as above, let
be the expansion of the error with respect to the eigenvectors of . Derive conditions on such that the method converges for any , and prove that, for any such , the rate of convergence of is not faster than .
(iii) Show that, for some , the high frequency components of the error tend to zero much faster than . Determine the optimal parameter which provides the largest suppression of the high frequency components per iteration, and find the corresponding attenuation factor (i.e., the least such that for ).
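For the one-dimensional discrete Poisson matrix, the relaxed Jacobi iteration matrix acts on the sine modes with eigenvalues 1 - 2w sin^2(k pi h / 2); the choice w = 2/3 damps the high-frequency half of the spectrum by at least a factor 1/3 per sweep, which is the smoothing property exploited in multigrid. A numerical sketch (grid size illustrative):

```python
# Sketch: damping factors of relaxed Jacobi on the 1D discrete Laplacian.
import math

def jacobi_damping_factors(n, w):
    """Eigenvalues of the relaxed Jacobi iteration matrix, modes k = 1..n."""
    h = 1.0 / (n + 1)
    return [1.0 - 2.0 * w * math.sin(k * math.pi * h / 2.0) ** 2
            for k in range(1, n + 1)]

n = 15
lams = jacobi_damping_factors(n, w=2.0 / 3.0)
# "high frequency" modes: upper half of the spectrum, k >= (n+1)/2
high = [abs(l) for k, l in enumerate(lams, start=1) if k >= (n + 1) // 2]
worst_high = max(high)   # attenuation per iteration; equals 1/3 for w = 2/3
```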
Paper 4, Section II, C
Part II, 2013 commentConsider the solution of the two-point boundary value problem
with periodic boundary conditions at and . Construct explicitly the linear algebraic system that arises from the application of a spectral method to the above equation.
The Fourier coefficients of are defined by
Prove that the computation of the Fourier coefficients for the truncated system with (where is an even and positive integer, and assuming that outside this range of ) reduces to the solution of a tridiagonal system of algebraic equations, which you should specify.
Explain the term convergence with spectral speed and justify its validity for the derived approximation of .
Paper 2, Section II, C
Part II, 2013 commentConsider the advection equation on the unit interval and , where , subject to the initial condition and the boundary condition , where is a given smooth function on .
(i) We commence by discretising the advection equation above with finite differences on the equidistant space-time grid with and . We obtain an equation for that reads
with the condition for all and .
What is the order of approximation (that is, the order of the local error) in space and time of the above discrete solution to the exact solution of the advection equation? Write the scheme in matrix form and deduce for which choices of this approximation converges to the exact solution. State (without proof) any theorems you use. [You may use the fact that for a tridiagonal matrix
the eigenvalues are given by .]
(ii) How does the order change when we replace the central difference approximation of the first derivative in space by forward differences, that is instead of ? For which choices of is this new scheme convergent?
(iii) Instead of the approximation in (i) we consider the following method for numerically solving the advection equation,
where we additionally assume that is given. What is the order of this method for a fixed ?
Paper 3, Section II, C
Part II, 2013 comment(i) Suppose that is a real matrix, and that and are given so that . Further, let be a non-singular matrix such that , where is the first coordinate vector and . Let . Prove that the eigenvalues of are together with the eigenvalues of the bottom right submatrix of .
(ii) Suppose again that is a real matrix, and that two linearly independent vectors are given such that the linear subspace spanned by and is invariant under the action of , that is
Denote by an matrix whose two columns are the vectors and , and let be a non-singular matrix such that is upper triangular, that is
Again, let . Prove that the eigenvalues of are the eigenvalues of the top left submatrix of together with the eigenvalues of the bottom right submatrix of
Paper 1, Section II, 40C
Part II, 2013 commentLet
(i) For which values of is positive definite?
(ii) Formulate the Gauss-Seidel method for the solution of a system
with as defined above and . Prove that the Gauss-Seidel method converges to the solution of the above system whenever is positive definite. [You may state and use the Householder-John theorem without proof.]
(iii) For which values of does the Jacobi iteration applied to the solution of the above system converge?
Paper 4, Section II, D
Part II, 2014 commentLet be a real symmetric matrix with distinct real eigenvalues and a corresponding orthogonal basis of normalized real eigenvectors .
(i) Let satisfy . Given a unit vector , the iteration scheme
generates a sequence of vectors for . Assuming that with , prove that tends to as . What happens to if ? [Consider all cases.]
(ii) Describe how to implement an inverse-iteration algorithm to compute the eigenvalues and eigenvectors of , given some initial estimates for the eigenvalues.
(iii) Let . For iterates of an inverse-iteration algorithm with a fixed value of , show that if
where is small, then is of the same order of magnitude as .
(iv) Let still. Consider the iteration scheme
for , where , denotes the inner product. Show that with this scheme
Paper 2, Section II, D
Part II, 2014 commentConsider the one-dimensional advection equation
subject to an initial condition . Consider discretization of this equation with finite differences on an equidistant space-time with step size in space and step size in time. Define the Courant number and explain briefly how such a discretization can be used to derive numerical schemes in which solutions and satisfy equations of the form
where the coefficients are independent of .
(i) Define the order of a numerical scheme such as (1). Define what a convergent numerical scheme is. Explain the notion of stability and state the Lax equivalence theorem that connects convergence and stability of numerical schemes for linear partial differential equations.
(ii) Consider the following example of (1):
Determine conditions on such that the scheme is stable and convergent. What is the order of this scheme?
Paper 3, Section II, D
Part II, 2014 commentConsider the linear system
where and .
(i) Define the Jacobi iteration method with relaxation parameter for solving (1).
(ii) Assume that is a symmetric positive-definite matrix whose diagonal part is such that the matrix is also positive definite. Prove that the relaxed Jacobi iteration method always converges if the relaxation parameter is equal to
(iii) Let be the tridiagonal matrix with diagonal elements and off-diagonal elements , where . For which values of (expressed in terms of and ) does the relaxed Jacobi iteration method converge? What choice of gives the optimal convergence speed?
[You may quote without proof any relevant results about the convergence of iterative methods and about the eigenvalues of matrices.]
Paper 1, Section II, 40D
Part II, 2014 comment(i) Consider the numerical approximation of the boundary-value problem
where are given constants and is a given smooth function on . A grid , on is given by
where and . Derive finite-difference approximations for , for , using at most one neighbouring grid point of on each side. Hence write down a numerical scheme to solve the problem, displaying explicitly the entries of the system matrix in the resulting system of linear equations , . What is the overall order of this numerical scheme? Explain briefly one strategy by which the order could be improved with the same grid.
(ii) Consider the numerical approximation of the boundary-value problem
where is an arbitrary, simply connected bounded domain with smooth boundary , and is a given smooth function. Define the 9-point formula used to approximate the Laplacian. Using this formula and an equidistant grid inside , define a numerical scheme for which the system matrix is symmetric and negative definite. Prove that the system matrix of your scheme has these properties for all choices of ordering of the grid points.

Paper 4, Section II, E
Part II, 2015 comment(a) Define the th Krylov space for and . Letting be the dimension of , prove the following results.
(i) There exists a positive integer such that for and for .
(ii) If , where are eigenvectors of for distinct eigenvalues and all are nonzero, then .
(b) Define the term residual in the conjugate gradient (CG) method for solving a system with symmetric positive definite . Explain (without proof) the connection to Krylov spaces and prove that for any right-hand side the CG method finds an exact solution after at most steps, where is the number of distinct eigenvalues of . [You may use without proof known properties of the iterates of the CG method.]
Define what is meant by preconditioning, and explain two ways in which preconditioning can speed up convergence. Can we choose the preconditioner so that the CG method requires only one step? If yes, is it a reasonable method for speeding up the computation?
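As an editorial illustration of the termination property in part (b), here is a minimal plain conjugate gradient sketch in Python (my own naming and test problem, not from the question). The test matrix has exactly two distinct eigenvalues, so CG should stop after at most two steps:

```python
def conjugate_gradient(A, b, tol=1e-12, maxit=100):
    """Plain CG for symmetric positive definite A; returns the solution
    and the number of iterations performed."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                    # residual b - A x for x = 0
    p = r[:]
    rs = sum(v * v for v in r)
    for k in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol * tol:
            return x, k + 1
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x, maxit

# Diagonal matrix with two distinct eigenvalues (2 and 5): at most 2 CG steps
A = [[2.0, 0, 0, 0], [0, 2.0, 0, 0], [0, 0, 5.0, 0], [0, 0, 0, 5.0]]
b = [1.0, 2.0, 3.0, 4.0]
x, steps = conjugate_gradient(A, b)
```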
Paper 2, Section II, E
Part II, 2015 comment(a) The boundary value problem on the unit square with zero boundary conditions and scalar constant is discretised using finite differences as
with . Show that for the resulting system , for a suitable matrix and vectors and , both the Jacobi and Gauss-Seidel methods converge. [You may cite and use known results on the discretised Laplace operator and on the convergence of iterative methods.]
Define the Jacobi method with relaxation parameter . Find the eigenvalues of the iteration matrix for the above problem and show that, in order to ensure convergence for all , the condition is necessary.
[Hint: The eigenvalues of the discretised Laplace operator in two dimensions are for integers .]
(b) Explain the components and steps in a multigrid method for solving the Poisson equation, discretised as . If we use the relaxed Jacobi method within the multigrid method, is it necessary to choose to get fast convergence? Explain why or why not.
Paper 3, Section II, E
Part II, 2015 comment(a) Given the finite-difference recurrence
that discretises a Cauchy problem, the amplification factor is defined by
Show how acts on the Fourier transform of . Hence prove that the method is stable if and only if for all .
(b) The two-dimensional diffusion equation
for some scalar constant is discretised with the forward Euler scheme
Using Fourier stability analysis, find the range of values for which the scheme is stable.
Paper 1, Section II, 38E
Part II, 2015 comment(a) The diffusion equation
with the initial condition in and zero boundary conditions at and , is solved by the finite-difference method
where , and .
Assuming that the function and the exact solution are sufficiently smooth, prove that the exact solution satisfies the numerical scheme with error for constant .
(b) For the problem in part (a), assume that there exist such that for all . State (without proof) the Gershgorin theorem and prove that the method is stable for .
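As a hedged numerical aside, the Gershgorin argument in part (b) can be checked on a constant-coefficient simplification of the scheme (the tridiagonal iteration matrix with diagonal 1 - 2*mu and off-diagonals mu; the function names and this reduction are my own):

```python
def gershgorin_bounds(A):
    """Lower and upper bounds on the real eigenvalues of a real symmetric
    matrix A, from the union of its Gershgorin discs."""
    n = len(A)
    lo = min(A[i][i] - sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n))
    hi = max(A[i][i] + sum(abs(A[i][j]) for j in range(n) if j != i) for i in range(n))
    return lo, hi

def scheme_matrix(n, mu):
    """Iteration matrix of the explicit scheme for u_t = u_xx with constant
    coefficient: tridiagonal, diagonal 1 - 2*mu, off-diagonals mu."""
    return [[1 - 2 * mu if i == j else (mu if abs(i - j) == 1 else 0.0)
             for j in range(n)] for i in range(n)]

lo, hi = gershgorin_bounds(scheme_matrix(6, 0.5))
```

At the borderline value mu = 1/2 every Gershgorin disc lies inside [-1, 1], so all eigenvalues have modulus at most one, which is the stability mechanism the question is after.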
Paper 4, Section II, B
Part II, 2016 comment(a) Describe an implementation of the power method for determining the eigenvalue of largest modulus and its associated eigenvector for a matrix that has a unique eigenvalue of largest modulus.
Now let be a real matrix with distinct eigenvalues satisfying and . The power method is applied to , with an initial condition such that , where is the eigenvector associated with . Show that the power method does not converge. Explain why and become linearly dependent as .
(b) Consider the following variant of the power method, called the two-stage power method, applied to the matrix of part (a):
Pick satisfying . Let . Set and .
Calculate and calculate that minimise
If , solve and let the roots be and . They are accepted as eigenvalues of , and the corresponding eigenvectors are estimated as and .
Otherwise, divide and by the current value of , increase by 1 and return to Step .
Explain the justification behind Step 2 of the algorithm.
(c) Let , and suppose that, for a large value of , the two-stage power method yields the vectors
Find two eigenvalues of and the corresponding eigenvectors.
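For orientation, part (a)'s classical power method (the single dominant eigenvalue case, before the two-stage modification) can be sketched as follows; the example matrix and names are my own:

```python
def power_method(A, x0, iters=500):
    """Basic power iteration: returns an approximate dominant eigenvalue
    (Rayleigh quotient) and the corresponding eigenvector estimate."""
    n = len(A)
    x = x0[:]
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(abs(v) for v in y)        # scale to avoid over/underflow
        x = [v / norm for v in y]
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    lam = sum(x[i] * Ax[i] for i in range(n)) / sum(v * v for v in x)
    return lam, x

A = [[2.0, 1.0], [1.0, 2.0]]    # eigenvalues 3 and 1
lam, v = power_method(A, [1.0, 0.0])
```

When the two largest eigenvalues satisfy lambda_1 = -lambda_2, the iterates oscillate between two directions rather than converging, which is exactly the failure mode part (a) asks about and the two-stage method repairs.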
Paper 2, Section II, B
Part II, 2016 comment(a) The advection equation
is discretised using an equidistant grid with stepsizes and . The spatial derivatives are approximated with central differences and the resulting ODEs are approximated with the trapezoidal rule. Write down the relevant difference equation for determining from . What is the name of this scheme? What is the local truncation error?
The boundary condition is periodic, . Explain briefly how to write the discretised scheme in the form , where the matrices and , to be identified, have a circulant form. Using matrix analysis, find the range of for which the scheme is stable. [Standard results may be used without proof if quoted carefully.]
[Hint: An circulant matrix has the form
All such matrices have the same set of eigenvectors , where , and the corresponding eigenvalues are .]
(b) Consider the advection equation on the unit square
where satisfies doubly periodic boundary conditions, , and and are given doubly periodic functions. The system is discretised with the Crank-Nicolson scheme, with central differences for the space derivatives, using an equidistant grid with stepsizes and . Write down the relevant difference equation, and show how to write the scheme in the form
where the matrix should be identified. Describe how (*) can be approximated by Strang splitting, and explain the advantages of doing so.
[Hint: Inversion of the matrix in part (a) has a similar computational cost to that of a tridiagonal matrix.]
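The circulant eigenvalue formula quoted in the hint of part (a) is easy to verify numerically. A small Python sketch (my own helper names; the skew-symmetric first row mimics central differencing of the advection term):

```python
import cmath

def circulant_eigs(c):
    """Eigenvalues of the circulant matrix with first row c:
    lambda_l = sum_j c_j * omega^(j*l), omega = exp(2*pi*i/n)."""
    n = len(c)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(c[j] * w ** (j * l) for j in range(n)) for l in range(n)]

def circulant(c):
    """Build the full circulant matrix with first row c."""
    n = len(c)
    return [[c[(j - i) % n] for j in range(n)] for i in range(n)]

c = [0.0, 1.0, 0.0, -1.0]   # skew-symmetric row, as in central differences
eigs = circulant_eigs(c)
C = circulant(c)
```

For this skew-symmetric circulant the eigenvalues come out purely imaginary, which is the key fact behind the stability analysis of the trapezoidal-rule scheme.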
Paper 1, Section II, B
Part II, 2016 comment(a) Consider the periodic function
on the interval . The -point discrete Fourier transform of is defined by
where and . Compute , using the Fast Fourier Transform (FFT). Compare your result with what you get by computing directly from . Carefully explain all your computations.
(b) Now let be any analytic function on that is periodic with period 1. The continuous Fourier transform of is defined by
Use integration by parts to show that the Fourier coefficients decay spectrally.
Explain what it means for the discrete Fourier transform of to approximate the continuous Fourier transform with spectral accuracy. Prove that it does so.
What can you say about the behaviour of as for fixed ?
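To accompany part (a), here is a minimal recursive radix-2 FFT in Python checked against the naive transform. Note the sign and normalisation conventions here are my own choice and may differ from the definition used in the question:

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t             # combine even/odd half-transforms
        out[k + n // 2] = even[k] - t
    return out

def dft(x):
    """Naive O(n^2) DFT for comparison."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

x = [1.0, 2.0, 0.0, -1.0, 3.0, 1.0, -2.0, 0.5]
X1, X2 = fft(x), dft(x)
```

The recursion is exactly the even/odd splitting of part (a): each level costs O(n) multiplications, giving the O(n log n) total that part (c) asks to count.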
Paper 3, Section II, B
Part II, 2016 comment(a) Define the Jacobi and Gauss-Seidel iteration schemes for solving a linear system of the form , where and , giving formulae for the corresponding iteration matrices and in terms of the usual decomposition .
Show that both iteration schemes converge when results from discretization of Poisson's equation on a square with the five-point formula, that is when
and . [You may state the Householder-John theorem without proof.]
(b) For the matrix given in :
(i) Calculate the eigenvalues of and deduce its spectral radius .
(ii) Show that each eigenvector of is related to an eigenvector of by a transformation of the form for a suitable value of .
(iii) Deduce that . What is the significance of this result for the two iteration schemes?
[Hint: You may assume that the eigenvalues of the matrix in are
with corresponding eigenvectors
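The relation in part (b)(iii), namely that the Gauss-Seidel spectral radius equals the square of the Jacobi one for this matrix, can be observed numerically. A hedged Python sketch (my own construction of the five-point matrix on a small interior grid; spectral radii estimated by power iteration on the error-propagation maps):

```python
import math, random

def five_point_matrix(m):
    """Five-point Laplacian matrix (without the 1/h^2 factor) for an
    m x m interior grid, points ordered lexicographically."""
    n = m * m
    A = [[0.0] * n for _ in range(n)]
    for p in range(m):
        for q in range(m):
            k = p * m + q
            A[k][k] = 4.0
            if p > 0: A[k][k - m] = -1.0
            if p < m - 1: A[k][k + m] = -1.0
            if q > 0: A[k][k - 1] = -1.0
            if q < m - 1: A[k][k + 1] = -1.0
    return A

def jacobi_step(A, x):
    n = len(A)
    return [x[i] - sum(A[i][j] * x[j] for j in range(n)) / A[i][i] for i in range(n)]

def gauss_seidel_step(A, x):
    n = len(A)
    y = x[:]
    for i in range(n):   # forward sweep uses already-updated entries
        y[i] = -sum(A[i][j] * y[j] for j in range(n) if j != i) / A[i][i]
    return y

def spectral_radius(step, A, double_step=False, iters=200):
    """Power iteration on the iteration map; double_step squares the map,
    which handles the +-rho eigenvalue pair of the Jacobi matrix."""
    random.seed(2)
    x = [random.random() for _ in range(len(A))]
    nrm = max(abs(v) for v in x)
    x = [v / nrm for v in x]
    for _ in range(iters):
        y = step(A, step(A, x)) if double_step else step(A, x)
        nrm = max(abs(v) for v in y)
        x = [v / nrm for v in y]
    return math.sqrt(nrm) if double_step else nrm

A = five_point_matrix(3)
rho_j = spectral_radius(jacobi_step, A, double_step=True)
rho_gs = spectral_radius(gauss_seidel_step, A)
```

For the 3 x 3 interior grid one expects rho_j close to cos(pi/4) and rho_gs close to its square, 1/2: Gauss-Seidel contracts the error roughly twice as fast per sweep.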
Paper 2, Section II, A
Part II, 2017 commentThe Poisson equation in the unit square , equipped with the zero Dirichlet boundary conditions on , is discretized with the nine-point formula:
where , and are the grid points with .
(i) Find the order of the local truncation error of the approximation.
(ii) Prove that the order of the truncation error is smaller if satisfies the Laplace equation .
(iii) Show that the modified nine-point scheme
has a truncation error of the same order as in part (ii).
(iv) Let be a solution to the system of linear equations arising from the modified nine-point scheme in part (iii). Further, let be the exact solution and let be the error of approximation at grid points. Prove that there exists a constant such that
[Hint: The nine-point discretization of can be written as
where is the five-point discretization and
[Hint: The matrix A of the nine-point scheme is symmetric, with the eigenvalues
Paper 1, Section II, A
Part II, 2017 commentState the Householder-John theorem and explain how it can be used in designing iterative methods for solving a system of linear equations . [You may quote other relevant theorems if needed.]
Consider the following iterative scheme for solving . Let , where is the diagonal part of , and and are the strictly lower and upper triangular parts of , respectively. Then, with some starting vector , the scheme is as follows:
Prove that if is a symmetric positive definite matrix and , then, for any , the above iteration converges to the solution of the system .
Which method corresponds to the case
Paper 3, Section II, A
Part II, 2017 commentLet be a real symmetric matrix with real and distinct eigenvalues and a corresponding orthogonal basis of normalized real eigenvectors .
To estimate the eigenvector of whose eigenvalue is , the power method with shifts is employed which has the following form:
Three versions of this method are considered:
(i) no shift: ;
(ii) single shift: ;
(iii) double shift: .
Assume that , where is very small, so that the terms are negligible, and that contains substantial components of all the eigenvectors.
By considering the approximation after iterations in the form
find as a function of for each of the three versions of the method.
Compare the convergence rates of the three versions of the method, with reference to the number of iterations needed to achieve a prescribed accuracy.
Paper 4, Section II, A
Part II, 2017 comment(a) The diffusion equation
is approximated by the Crank-Nicolson scheme
with . Here , and is an approximation to . Assuming that , show that the above scheme can be written in the form
where and the real matrices and should be found. Using matrix analysis, find the range of for which the scheme is stable.
[Hint: All Toeplitz symmetric tridiagonal (TST) matrices have the same set of orthogonal eigenvectors, and a TST matrix with the elements and has the eigenvalues . ]
(b) The wave equation
with given initial conditions for and , is approximated by the scheme
with the Courant number now . Applying the Fourier technique, find the range of for which the method is stable.
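As a hedged illustration of part (a), one Crank-Nicolson step for the diffusion equation reduces to a tridiagonal solve; the Thomas-algorithm helper and test mode below are my own, not from the question:

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system: sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d (Thomas algorithm)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, mu):
    """One step of (I - mu/2 delta^2) u^{n+1} = (I + mu/2 delta^2) u^n
    for u_t = u_xx with zero boundary values."""
    n = len(u)
    rhs = [(1 - mu) * u[i] + (mu / 2) * ((u[i - 1] if i > 0 else 0.0) +
                                         (u[i + 1] if i < n - 1 else 0.0))
           for i in range(n)]
    return thomas([-mu / 2] * n, [1 + mu] * n, [-mu / 2] * n, rhs)

# A sine mode must decay, never grow, for any mu (unconditional stability)
n = 20
u = [math.sin(math.pi * (i + 1) / (n + 1)) for i in range(n)]
for _ in range(5):
    u = crank_nicolson_step(u, mu=10.0)
maxu = max(abs(v) for v in u)
```

Even at the very large Courant number mu = 10 the mode amplitude decreases monotonically, consistent with the matrix analysis asked for in part (a).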
Paper 4, Section II, E
Part II, 2018 commentThe inverse discrete Fourier transform is given by the formula
Here, is the primitive root of unity of degree and
(a) Show how to assemble in a small number of operations if the Fourier transforms of the even and odd parts of ,
are already known.
(b) Describe the Fast Fourier Transform (FFT) method for evaluating , and draw a diagram to illustrate the method for .
(c) Find the cost of the FFT method for (only multiplications count).
(d) For use the FFT method to find when: (i) , (ii) .
Paper 2, Section II, E
Part II, 2018 commentThe Poisson equation in the unit interval , with , is discretised with the formula
where , the grid points are at and .
(a) Write the above system of equations in the vector form and describe the relaxed Jacobi method with relaxation parameter for solving this linear system.
(b) For and being the exact and the iterated solution, respectively, let be the error and be the iteration matrix, so that
Express in terms of the matrix and the relaxation parameter . Using the fact that for any Toeplitz symmetric tridiagonal matrix, the eigenvectors have the form:
find the eigenvalues of . Hence deduce the eigenvalues of .
(c) For as above, let
be the expansion of the error with respect to the eigenvectors of .
Find the range of the parameter which provides convergence of the method for any , and prove that, for any such , the rate of convergence is not faster than when is large.
(d) Show that, for an appropriate range of , the high frequency components of the error tend to zero much faster than the rate obtained in part (c). Determine the optimal parameter which provides the largest suppression of the high frequency components per iteration, and find the corresponding attenuation factor assuming is large. That is, find the least such that for .
Paper 1, Section II, E
Part II, 2018 comment(a) Suppose that is a real matrix, and and are given so that . Further, let be a non-singular matrix such that , where is the first coordinate vector and .
Let . Prove that the eigenvalues of are together with the eigenvalues of the bottom right submatrix of .
Explain briefly how, given a vector , an orthogonal matrix such that can be constructed.
(b) Suppose that is a real matrix, and two linearly independent vectors are given such that the linear subspace spanned by and is invariant under the action of , i.e.,
Denote by an matrix whose two columns are the vectors and , and let be a non-singular matrix such that is upper triangular:
Again, let . Prove that the eigenvalues of are the eigenvalues of the top left submatrix of together with the eigenvalues of the bottom right submatrix of .
Explain briefly how, for given vectors , an orthogonal matrix which satisfies (*) can be constructed.
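The orthogonal matrix asked for in part (a) is typically a Householder reflection. A minimal Python sketch (names my own) that maps a given vector onto a multiple of the first coordinate vector:

```python
import math

def householder_to_e1(w):
    """Orthogonal S = I - 2 u u^T / (u^T u) with S w = -sign(w_1) ||w|| e_1.
    The sign is chosen to avoid cancellation when forming u."""
    n = len(w)
    norm = math.sqrt(sum(x * x for x in w))
    sign = 1.0 if w[0] >= 0 else -1.0
    u = w[:]
    u[0] += sign * norm              # u = w + sign(w_1) ||w|| e_1
    uu = sum(x * x for x in u)
    return [[(1.0 if i == j else 0.0) - 2 * u[i] * u[j] / uu
             for j in range(n)] for i in range(n)]

w = [3.0, 4.0]
S = householder_to_e1(w)
Sw = [sum(S[i][j] * w[j] for j in range(2)) for i in range(2)]
```

For part (b), two such reflections applied in sequence reduce the two given columns to the required upper-triangular shape.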
Paper 3, Section II, E
Part II, 2018 commentThe diffusion equation for :
is solved numerically by the difference scheme
Here is the Courant number, with , and .
(a) Prove that, as with constant , the local error of the method is .
(b) Applying the Fourier stability analysis, show that the method is stable if and only if . [Hint: If a polynomial has real roots, then those roots lie in if and only if and .]
(c) Prove that, for the same equation, the leapfrog scheme
is unstable for any choice of .
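The instability claimed in part (c) is easy to witness numerically: the leapfrog recurrence for the diffusion equation always has a root of modulus greater than one. A hedged Python sketch (my own setup; a tiny sawtooth perturbation is seeded deliberately so the unstable root is excited deterministically rather than only through rounding):

```python
import math

def leapfrog_diffusion(mu, steps, n=20):
    """Leapfrog u^{m+1} = u^{m-1} + 2 mu delta^2 u^m for u_t = u_xx with
    zero boundary values; returns max |u| after the given number of steps."""
    def lap(u, i):
        return ((u[i - 1] if i > 0 else 0.0) - 2 * u[i]
                + (u[i + 1] if i < n - 1 else 0.0))
    u_prev = [math.sin(math.pi * (i + 1) / (n + 1)) for i in range(n)]
    # seed a tiny high-frequency perturbation to expose the unstable root
    u_prev = [u_prev[i] + 1e-10 * (-1) ** i for i in range(n)]
    # bootstrap the second time level with one forward Euler step
    u = [u_prev[i] + mu * lap(u_prev, i) for i in range(n)]
    for _ in range(steps):
        u, u_prev = [u_prev[i] + 2 * mu * lap(u, i) for i in range(n)], u
    return max(abs(v) for v in u)

growth = leapfrog_diffusion(mu=0.5, steps=120)
```

The initially negligible high-frequency component is amplified by a factor of roughly c + sqrt(1 + c^2) per step (c proportional to mu), so the solution blows up for every positive mu.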
Paper 4, Section II, C
Part II, 2019 commentFor a 2-periodic analytic function , its Fourier expansion is given by the formula
(a) Consider the two-point boundary value problem
with periodic boundary conditions . Construct explicitly the infinite-dimensional linear algebraic system that arises from the application of the Fourier spectral method to the above equation, and explain how to truncate the system to a finite-dimensional one.
(b) A rectangle rule is applied to computing the integral of a 2-periodic analytic function :
Find an expression for the error of , in terms of , and show that has a spectral rate of decay as . [In the last part, you may quote a relevant theorem about .]
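The spectral decay in part (b) can be observed numerically. A hedged Python sketch, assuming the period is 2 and taking an analytic periodic integrand of my own choosing (a fine rule serves as the reference value):

```python
import math

def rectangle_rule(f, n, period=2.0):
    """Equally weighted rectangle rule over one period, starting at -1."""
    h = period / n
    return h * sum(f(-1.0 + k * h) for k in range(n))

f = lambda x: math.exp(math.cos(math.pi * x))   # analytic and 2-periodic
ref = rectangle_rule(f, 256)                    # effectively exact reference
errs = [abs(rectangle_rule(f, n) - ref) for n in (4, 8, 16)]
```

Doubling the number of points roughly squares the error rather than merely quartering it: the error at n points is governed by the n-th Fourier coefficient, which decays faster than any power of 1/n for analytic periodic functions.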
Paper 2, Section II, C
Part II, 2019 commentThe Poisson equation on the unit square, equipped with zero boundary conditions, is discretized with the 9-point scheme:
where , and are the grid points with . We also assume that .
(a) Prove that all tridiagonal symmetric Toeplitz (TST-) matrices
share the same eigenvectors with the components for . Find expressions for the corresponding eigenvalues for . Deduce that , where and is the matrix
(b) Show that, by arranging the grid points into a one-dimensional array by columns, the 9-point scheme results in the following system of linear equations of the form
where , the vectors are portions of , respectively, and are TST-matrices whose elements you should determine.
(c) Using , show that (2) is equivalent to
where and are diagonal matrices.
(d) Show that, by appropriate reordering of the grid, the system (3) is reduced to uncoupled systems of the form
Determine the elements of the matrices .
Paper 3, Section II, 40C
Part II, 2019 commentThe diffusion equation
with the initial condition , and boundary conditions , is discretised by with . The Courant number is given by .
(a) The system is solved numerically by the method
Prove directly that implies convergence.
(b) Now consider the method
where and are real constants. Using an eigenvalue analysis and carefully justifying each step, determine conditions on and for this method to be stable.
[You may use the notation for the tridiagonal matrix with along the diagonal, and along the sub-and super-diagonals and use without proof any relevant theorems about such matrices.]
Paper 1, Section II, C
Part II, 2019 comment(a) Describe the Jacobi method for solving a system of linear equations as a particular case of splitting, and state the criterion for its convergence in terms of the iteration matrix.
(b) For the case when
find the exact range of the parameter for which the Jacobi method converges.
(c) State the Householder-John theorem and deduce that the Jacobi method converges if is a symmetric positive-definite tridiagonal matrix.
Paper 1, Section II, E
Part II, 2020 commentLet be a real symmetric matrix with distinct eigenvalues and a corresponding orthonormal basis of real eigenvectors . Given a unit norm vector , and a set of parameters , consider the inverse iteration algorithm
(a) Let const for all . Assuming that with all , prove that
Explain briefly what happens to when for some , and when .
(b) Let for . Assuming that, for some , some and for a small ,
where is the appropriate normalising constant. Show that and determine the value of . Hence show that
where is the appropriate normalising constant, and find expressions for .
Paper 2, Section II, 40E
Part II, 2020 comment(a) For and nonzero , define the -th Krylov subspace of . Prove that if has linearly independent eigenvectors with at most distinct eigenvalues, then
(b) Define the term residual in the conjugate gradient (CG) method for solving a system with a symmetric positive definite . State two properties of the method regarding residuals and their connection to certain Krylov subspaces, and hence show that, for any right-hand side , the method finds the exact solution after at most iterations, where is the number of distinct eigenvalues of .
(c) The preconditioned CG-method is applied for solving , with
Prove that the method finds the exact solution after two iterations at most.
(d) Prove that, for any symmetric positive definite , we can find a preconditioner such that the preconditioned CG-method for solving would require only one step. Explain why this preconditioning is of hardly any use.
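As an aside on part (d): if the preconditioned matrix is a multiple of the identity (a single distinct eigenvalue), the very first CG step is already exact, because from a zero starting vector that step is a scaled copy of the right-hand side. A minimal sketch (my own names and example):

```python
def cg_first_step(A, b):
    """First CG iterate from x0 = 0: x1 = alpha * b, alpha = (b.b)/(b.A b)."""
    n = len(b)
    Ab = [sum(A[i][j] * b[j] for j in range(n)) for i in range(n)]
    alpha = sum(v * v for v in b) / sum(b[i] * Ab[i] for i in range(n))
    return [alpha * v for v in b]

# Preconditioned matrix equal to a multiple of the identity: one exact step
A = [[3.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 3.0]]
b = [1.0, -2.0, 4.0]
x1 = cg_first_step(A, b)
r = [b[i] - 3.0 * x1[i] for i in range(3)]
```

The catch, of course, is that producing such a preconditioner amounts to inverting the original matrix, so nothing is saved — which is the point the question asks you to make.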
Paper 3, Section II, 40E
Part II, 2020 comment(a) Give the definition of a normal matrix. Prove that if is normal, then the (Euclidean) matrix -norm of is equal to its spectral radius, i.e., .
(b) The advection equation
is discretized by the Crank-Nicolson scheme
Here, is the Courant number, with , and is an approximation to .
Using the eigenvalue analysis and carefully justifying each step, determine conditions on for which the method is stable. [Hint: All Toeplitz anti-symmetric tridiagonal (TAT) matrices have the same set of orthogonal eigenvectors, and a TAT matrix with the elements and has the eigenvalues where .]
(c) Consider the same advection equation for the Cauchy problem . Now it is discretized by the two-step leapfrog scheme
Applying the Fourier technique, find the range of for which the method is stable.
Paper 4, Section II, 40E
Part II, 2020 comment(a) For a function which is real analytic in and 2-periodic in each variable, its Fourier expansion is given by the formula
Derive expressions for the Fourier coefficients of partial derivatives and those of the product in terms of and .
(b) Let be the 2-periodic solution in of the general second-order elliptic PDE
where and are both real analytic and 2-periodic, and . We impose the normalisation condition and note from the PDE .
Construct explicitly the infinite-dimensional linear algebraic system that arises from the application of the Fourier spectral method to the above equation, and explain how to truncate this system to a finite-dimensional one.
(c) Specify the truncated system for the unknowns for the case
and prove that, for any ordering of the Fourier coefficients into a one-dimensional array, the resulting system is symmetric and positive definite. [You may use the Gershgorin theorem without proof.]
Paper 1, Section II, E
Part II, 2021 commentLet with and define is not invertible .
The QR algorithm for computing is defined as follows. Set . For compute the factorization and set . (Here is an orthogonal matrix and is an upper triangular matrix.)
(a) Show that is related to the original matrix by the similarity transformation , where is orthogonal and is the QR factorization of with .
(b) Suppose that is symmetric and that its eigenvalues satisfy
Suppose, in addition, that the first two canonical basis vectors are given by , , where for and are the normalised eigenvectors of .
Let be the upper left corner of . Show that as , where and denotes the Hausdorff metric
[Hint: You may use the fact that for real symmetric matrices we have
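A basic (unshifted) QR algorithm, as defined at the start of the question, can be sketched in a few lines of Python; the Gram-Schmidt factorisation and 2 x 2 test matrix are my own additions:

```python
import math

def qr_decompose(A):
    """QR factorisation by modified Gram-Schmidt (A square, full rank);
    R has positive diagonal entries."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q_cols, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(j):
            R[i][j] = sum(Q_cols[i][k] * v[k] for k in range(n))
            v = [v[k] - R[i][j] * Q_cols[i][k] for k in range(n)]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        Q_cols.append([x / R[j][j] for x in v])
    Q = [[Q_cols[j][i] for j in range(n)] for i in range(n)]
    return Q, R

def qr_algorithm(A, iters=100):
    """QR algorithm: factor A_k = Q_k R_k, then set A_{k+1} = R_k Q_k."""
    n = len(A)
    for _ in range(iters):
        Q, R = qr_decompose(A)
        A = [[sum(R[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return A

A = [[2.0, 1.0], [1.0, 2.0]]    # symmetric, eigenvalues 3 and 1
Ak = qr_algorithm(A)
```

For this symmetric matrix the iterates converge to a diagonal matrix carrying the eigenvalues in decreasing order, with the off-diagonal entry shrinking like the ratio of the eigenvalues to the k-th power.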
Paper 2, Section II, 41E
Part II, 2021 comment(a) Let and define by
Let be defined as the discrete Fourier transform (DFT) of , i.e.
Show that
(b) Define the discrete cosine transform by
For with , show that, similar to the Fast Fourier Transform (FFT), there exists an algorithm that computes the DCT of a vector of length , where the number of multiplications required is bounded by , where is some constant independent of .
[You may not assume that the FFT algorithm requires multiplications to compute the DFT of a vector of length . If you use this, you must prove it.]
Paper 3, Section II, 40E
Part II, 2021 commentConsider discretisation of the diffusion equation
by the Crank-Nicolson method:
where is the Courant number, is the step size in the space discretisation, is the step size in the time discretisation, and , where is the solution of . The initial condition is given.
(a) Consider the Cauchy problem for on the whole line, (thus ), and derive the formula for the amplification factor of the Crank-Nicolson method ( ). Use the amplification factor to show that the Crank-Nicolson method is stable for the Cauchy problem for all .
[You may quote basic properties of the Fourier transform mentioned in lectures, but not the theorem on sufficient and necessary conditions on the amplification factor to have stability.]
(b) Consider on the interval (thus and ) with Dirichlet boundary conditions and , for some sufficiently smooth functions and . Show directly (without using the Lax equivalence theorem) that, given sufficient smoothness of , the Crank-Nicolson method is convergent, for any , in the norm defined by for .
[You may assume that the Trapezoidal method has local order 3, and that the standard three-point centred discretisation of the second derivative (as used in the Crank-Nicolson method) has local order 2.]
Paper 4, Section II, 40E
Part II, 2021 comment(a) Show that if and are real matrices such that both and are symmetric positive definite, then the spectral radius of is strictly less than
(b) Consider the Poisson equation (with zero Dirichlet boundary condition) on the unit square, where is some smooth function. Given and an equidistant grid on the unit square with stepsize , the standard five-point method is given by
where and . Equation can be written as a linear system , where and both depend on the chosen ordering of the grid points.
Use the result in part (a) to show that the Gauss-Seidel method converges for the linear system described above, regardless of the choice of ordering of the grid points.
[You may quote convergence results - based on the spectral radius of the iteration matrix - mentioned in the lecture notes.]
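To close, the Gauss-Seidel iteration for the five-point scheme is short to implement and its convergence easy to observe. A hedged Python sketch using a manufactured solution of my own (u = sin(pi x) sin(pi y), so f = -2 pi^2 u); after enough sweeps the remaining error is just the O(h^2) discretisation error:

```python
import math

def gauss_seidel_poisson(f, m, sweeps=500):
    """Gauss-Seidel sweeps for the five-point scheme
    u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} - 4 u_{ij} = h^2 f_{ij}
    with zero boundary values on the unit square; h = 1/(m+1)."""
    h = 1.0 / (m + 1)
    u = [[0.0] * (m + 2) for _ in range(m + 2)]   # includes boundary zeros
    for _ in range(sweeps):
        for i in range(1, m + 1):                 # lexicographic sweep
            for j in range(1, m + 1):
                u[i][j] = (u[i - 1][j] + u[i + 1][j] + u[i][j - 1]
                           + u[i][j + 1] - h * h * f(i * h, j * h)) / 4.0
    return u

m = 10
f = lambda x, y: -2 * math.pi ** 2 * math.sin(math.pi * x) * math.sin(math.pi * y)
u = gauss_seidel_poisson(f, m)
err = max(abs(u[i][j] - math.sin(math.pi * i / (m + 1)) * math.sin(math.pi * j / (m + 1)))
          for i in range(m + 2) for j in range(m + 2))
```

By part (a) the convergence is independent of the sweep ordering; only the constant in the convergence rate changes.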